Chelsea707 committed · Commit fa5a6a3 · verified · 1 Parent(s): 10b8130

Add Batch e293d33b-de1f-443d-9c65-e13ddbd0a3e7 data

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.
Files changed (50)
  1. .gitattributes +28 -0
  2. 2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/02c65ea8-73c1-4552-88f9-a2d5a7f7c433_content_list.json +2048 -0
  3. 2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/02c65ea8-73c1-4552-88f9-a2d5a7f7c433_model.json +0 -0
  4. 2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/02c65ea8-73c1-4552-88f9-a2d5a7f7c433_origin.pdf +3 -0
  5. 2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/full.md +392 -0
  6. 2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/images.zip +3 -0
  7. 2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/layout.json +0 -0
  8. 2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/3cfba471-572f-49ab-a5f2-cb3f967b96d6_content_list.json +0 -0
  9. 2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/3cfba471-572f-49ab-a5f2-cb3f967b96d6_model.json +0 -0
  10. 2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/3cfba471-572f-49ab-a5f2-cb3f967b96d6_origin.pdf +3 -0
  11. 2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/full.md +454 -0
  12. 2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/images.zip +3 -0
  13. 2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/layout.json +0 -0
  14. 2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/ab706f98-d210-4683-ba07-c1e317949d40_content_list.json +0 -0
  15. 2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/ab706f98-d210-4683-ba07-c1e317949d40_model.json +0 -0
  16. 2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/ab706f98-d210-4683-ba07-c1e317949d40_origin.pdf +3 -0
  17. 2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/full.md +0 -0
  18. 2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/images.zip +3 -0
  19. 2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/layout.json +0 -0
  20. 2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/5d2ab272-62cd-4b5d-82f2-5c5c490fd685_content_list.json +0 -0
  21. 2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/5d2ab272-62cd-4b5d-82f2-5c5c490fd685_model.json +0 -0
  22. 2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/5d2ab272-62cd-4b5d-82f2-5c5c490fd685_origin.pdf +3 -0
  23. 2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/full.md +488 -0
  24. 2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/images.zip +3 -0
  25. 2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/layout.json +0 -0
  26. 2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/14b40f42-41c5-437e-b7fd-c53ddc62efec_content_list.json +0 -0
  27. 2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/14b40f42-41c5-437e-b7fd-c53ddc62efec_model.json +0 -0
  28. 2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/14b40f42-41c5-437e-b7fd-c53ddc62efec_origin.pdf +3 -0
  29. 2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/full.md +401 -0
  30. 2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/images.zip +3 -0
  31. 2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/layout.json +0 -0
  32. 2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/1bad24cc-7495-4411-88f1-9e429d01bb74_content_list.json +2047 -0
  33. 2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/1bad24cc-7495-4411-88f1-9e429d01bb74_model.json +0 -0
  34. 2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/1bad24cc-7495-4411-88f1-9e429d01bb74_origin.pdf +3 -0
  35. 2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/full.md +384 -0
  36. 2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/images.zip +3 -0
  37. 2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/layout.json +0 -0
  38. 2023/A Customized Text Sanitization Mechanism with Differential Privacy/cecb5923-3636-44eb-8dc9-ff69d69aa2f0_content_list.json +1847 -0
  39. 2023/A Customized Text Sanitization Mechanism with Differential Privacy/cecb5923-3636-44eb-8dc9-ff69d69aa2f0_model.json +0 -0
  40. 2023/A Customized Text Sanitization Mechanism with Differential Privacy/cecb5923-3636-44eb-8dc9-ff69d69aa2f0_origin.pdf +3 -0
  41. 2023/A Customized Text Sanitization Mechanism with Differential Privacy/full.md +341 -0
  42. 2023/A Customized Text Sanitization Mechanism with Differential Privacy/images.zip +3 -0
  43. 2023/A Customized Text Sanitization Mechanism with Differential Privacy/layout.json +0 -0
  44. 2023/A Diffusion Model for Event Skeleton Generation/98150961-6381-4a26-81e7-35b4d180926d_content_list.json +2052 -0
  45. 2023/A Diffusion Model for Event Skeleton Generation/98150961-6381-4a26-81e7-35b4d180926d_model.json +0 -0
  46. 2023/A Diffusion Model for Event Skeleton Generation/98150961-6381-4a26-81e7-35b4d180926d_origin.pdf +3 -0
  47. 2023/A Diffusion Model for Event Skeleton Generation/full.md +380 -0
  48. 2023/A Diffusion Model for Event Skeleton Generation/images.zip +3 -0
  49. 2023/A Diffusion Model for Event Skeleton Generation/layout.json +0 -0
  50. 2023/A Formal Perspective on Byte-Pair Encoding/6df33e58-ebbb-41f2-bab6-89afcc9dc353_content_list.json +0 -0
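Each paper in the batch follows the same per-directory layout visible in the list above (`*_content_list.json`, `*_model.json`, `*_origin.pdf`, `full.md`, `images.zip`, `layout.json`). As a minimal sketch (the `changed` sample paths are hypothetical, but mirror the `<year>/<paper title>/<artifact>` layout used in this commit), the changed-file list can be grouped back into per-paper directories like this:

```python
from collections import defaultdict

# Hypothetical sample of changed paths, following the layout shown above.
changed = [
    "2023/A Diffusion Model for Event Skeleton Generation/full.md",
    "2023/A Diffusion Model for Event Skeleton Generation/images.zip",
    "2023/A Formal Perspective on Byte-Pair Encoding/layout.json",
]

def group_by_paper(paths):
    """Map each '<year>/<title>' directory to the artifacts it contains."""
    papers = defaultdict(list)
    for p in paths:
        year, title, artifact = p.split("/", 2)
        papers[f"{year}/{title}"].append(artifact)
    return dict(papers)

print(group_by_paper(changed))
```

Splitting on the first two `/` separators keeps paper titles containing spaces (and underscores substituted for punctuation) intact.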
.gitattributes CHANGED
@@ -5063,3 +5063,31 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  2025/OZSpeech_[[:space:]]One-step[[:space:]]Zero-shot[[:space:]]Speech[[:space:]]Synthesis[[:space:]]with[[:space:]]Learned-Prior-Conditioned[[:space:]]Flow[[:space:]]Matching/ef88679b-44b4-4bf3-9248-5afe6d9058c0_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2025/ObfusLM_[[:space:]]Privacy-preserving[[:space:]]Language[[:space:]]Model[[:space:]]Service[[:space:]]against[[:space:]]Embedding[[:space:]]Inversion[[:space:]]Attacks/7ff0baf2-fd40-42b0-bd16-e086deebbd95_origin.pdf filter=lfs diff=lfs merge=lfs -text
  2025/Odysseus[[:space:]]Navigates[[:space:]]the[[:space:]]Sirens’[[:space:]]Song_[[:space:]]Dynamic[[:space:]]Focus[[:space:]]Decoding[[:space:]]for[[:space:]]Factual[[:space:]]and[[:space:]]Diverse[[:space:]]Open-Ended[[:space:]]Text[[:space:]]Generation/f59c9adc-c141-4269-a78e-b790ce79ab5f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/2_n[[:space:]]is[[:space:]]better[[:space:]]than[[:space:]]n2_[[:space:]]Decomposing[[:space:]]Event[[:space:]]Coreference[[:space:]]Resolution[[:space:]]into[[:space:]]Two[[:space:]]Tractable[[:space:]]Problems/02c65ea8-73c1-4552-88f9-a2d5a7f7c433_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Benchmark[[:space:]]on[[:space:]]Extremely[[:space:]]Weakly[[:space:]]Supervised[[:space:]]Text[[:space:]]Classification_[[:space:]]Reconcile[[:space:]]Seed[[:space:]]Matching[[:space:]]and[[:space:]]Prompting[[:space:]]Approaches/3cfba471-572f-49ab-a5f2-cb3f967b96d6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Call[[:space:]]for[[:space:]]Standardization[[:space:]]and[[:space:]]Validation[[:space:]]of[[:space:]]Text[[:space:]]Style[[:space:]]Transfer[[:space:]]Evaluation/ab706f98-d210-4683-ba07-c1e317949d40_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Class-Rebalancing[[:space:]]Self-Training[[:space:]]Framework[[:space:]]for[[:space:]]Distantly-Supervised[[:space:]]Named[[:space:]]Entity[[:space:]]Recognition/5d2ab272-62cd-4b5d-82f2-5c5c490fd685_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Comparative[[:space:]]Analysis[[:space:]]of[[:space:]]the[[:space:]]Effectiveness[[:space:]]of[[:space:]]Rare[[:space:]]Tokens[[:space:]]on[[:space:]]Creative[[:space:]]Expression[[:space:]]using[[:space:]]ramBERT/14b40f42-41c5-437e-b7fd-c53ddc62efec_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Confidence-based[[:space:]]Partial[[:space:]]Label[[:space:]]Learning[[:space:]]Model[[:space:]]for[[:space:]]Crowd-Annotated[[:space:]]Named[[:space:]]Entity[[:space:]]Recognition/1bad24cc-7495-4411-88f1-9e429d01bb74_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Customized[[:space:]]Text[[:space:]]Sanitization[[:space:]]Mechanism[[:space:]]with[[:space:]]Differential[[:space:]]Privacy/cecb5923-3636-44eb-8dc9-ff69d69aa2f0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Diffusion[[:space:]]Model[[:space:]]for[[:space:]]Event[[:space:]]Skeleton[[:space:]]Generation/98150961-6381-4a26-81e7-35b4d180926d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Formal[[:space:]]Perspective[[:space:]]on[[:space:]]Byte-Pair[[:space:]]Encoding/6df33e58-ebbb-41f2-bab6-89afcc9dc353_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Fused[[:space:]]Gromov-Wasserstein[[:space:]]Framework[[:space:]]for[[:space:]]Unsupervised[[:space:]]Knowledge[[:space:]]Graph[[:space:]]Entity[[:space:]]Alignment/7855e6df-d95a-401a-b4e9-52363c5ddf37_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Hierarchical[[:space:]]Explanation[[:space:]]Generation[[:space:]]Method[[:space:]]Based[[:space:]]on[[:space:]]Feature[[:space:]]Interaction[[:space:]]Detection/f20dc367-d200-432b-9bf2-412f1aa761c8_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Language-First[[:space:]]Approach[[:space:]]for[[:space:]]Procedure[[:space:]]Planning/5c8216a4-c14a-44e9-aa0e-0e398996ddeb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Match[[:space:]]Made[[:space:]]in[[:space:]]Heaven_[[:space:]]A[[:space:]]Multi-task[[:space:]]Framework[[:space:]]for[[:space:]]Hyperbole[[:space:]]and[[:space:]]Metaphor[[:space:]]Detection/21aa4949-080a-45fa-9eed-fd6a112d35b9_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Memory[[:space:]]Model[[:space:]]for[[:space:]]Question[[:space:]]Answering[[:space:]]from[[:space:]]Streaming[[:space:]]Data[[:space:]]Supported[[:space:]]by[[:space:]]Rehearsal[[:space:]]and[[:space:]]Anticipation[[:space:]]of[[:space:]]Coreference[[:space:]]Information/5de43f2d-7c4d-42a5-adf4-8f535d0f142f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Multi-dimensional[[:space:]]study[[:space:]]on[[:space:]]Bias[[:space:]]in[[:space:]]Vision-Language[[:space:]]models/31a69b52-0dbb-422c-827e-5b90df5ced15_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Multi-modal[[:space:]]Debiasing[[:space:]]Model[[:space:]]with[[:space:]]Dynamical[[:space:]]Constraint[[:space:]]for[[:space:]]Robust[[:space:]]Visual[[:space:]]Question[[:space:]]Answering/f28e6bd1-f1e6-452d-928e-c10b63162a87_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Multi-task[[:space:]]Learning[[:space:]]Framework[[:space:]]for[[:space:]]Quality[[:space:]]Estimation/c1f2e63c-48ba-4c3b-878f-f0de768cdce4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]New[[:space:]]Task[[:space:]]and[[:space:]]Dataset[[:space:]]on[[:space:]]Detecting[[:space:]]Attacks[[:space:]]on[[:space:]]Human[[:space:]]Rights[[:space:]]Defenders/82c12555-511f-4014-9bf9-1fca70e2945a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Pilot[[:space:]]Study[[:space:]]on[[:space:]]Dialogue-Level[[:space:]]Dependency[[:space:]]Parsing[[:space:]]for[[:space:]]Chinese/e54941d5-c460-492c-9bd1-39f3392dffa2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Robust[[:space:]]Information-Masking[[:space:]]Approach[[:space:]]for[[:space:]]Domain[[:space:]]Counterfactual[[:space:]]Generation/2d5913e9-1af5-4c55-a834-ddb9d5b84d50_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Self-Supervised[[:space:]]Integration[[:space:]]Method[[:space:]]of[[:space:]]Pretrained[[:space:]]Language[[:space:]]Models[[:space:]]and[[:space:]]Word[[:space:]]Definitions/909e9ea8-9315-4a5e-8d50-944f1a8aac87_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Semi-Autoregressive[[:space:]]Graph[[:space:]]Generative[[:space:]]Model[[:space:]]for[[:space:]]Dependency[[:space:]]Graph[[:space:]]Parsing/26291e48-5210-401e-b3e3-b04779ccbc0e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Sequence-to-Sequence&Set[[:space:]]Model[[:space:]]for[[:space:]]Text-to-Table[[:space:]]Generation/de3b87d8-1ffe-438a-8396-1939554c669e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Set[[:space:]]Prediction[[:space:]]Network[[:space:]]For[[:space:]]Extractive[[:space:]]Summarization/28d1d708-41da-41b5-a99d-7e59eb711af0_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Simple[[:space:]]Yet[[:space:]]Strong[[:space:]]Domain-Agnostic[[:space:]]De-bias[[:space:]]Method[[:space:]]for[[:space:]]Zero-Shot[[:space:]]Sentiment[[:space:]]Classification/1ba285e7-df57-4664-b4d0-15c6b23685b5_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Simple,[[:space:]]Yet[[:space:]]Effective[[:space:]]Approach[[:space:]]to[[:space:]]Finding[[:space:]]Biases[[:space:]]in[[:space:]]Code[[:space:]]Generation/e8411fc5-0955-4190-beb2-2715805f9198_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Statistical[[:space:]]Exploration[[:space:]]of[[:space:]]Text[[:space:]]Partition[[:space:]]Into[[:space:]]Constituents_[[:space:]]The[[:space:]]Case[[:space:]]of[[:space:]]the[[:space:]]Priestly[[:space:]]Source[[:space:]]in[[:space:]]the[[:space:]]Books[[:space:]]of[[:space:]]Genesis[[:space:]]and[[:space:]]Exodus/b2743efe-1a32-4d27-8beb-e93e98758370_origin.pdf filter=lfs diff=lfs merge=lfs -text
+ 2023/A[[:space:]]Study[[:space:]]on[[:space:]]Knowledge[[:space:]]Distillation[[:space:]]from[[:space:]]Weak[[:space:]]Teacher[[:space:]]for[[:space:]]Scaling[[:space:]]Up[[:space:]]Pre-trained[[:space:]]Language[[:space:]]Models/98396068-fe66-4c57-847e-795c0da309c4_origin.pdf filter=lfs diff=lfs merge=lfs -text
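The `[[:space:]]` sequences in the hunk above are how spaces in paper-title paths are escaped as a POSIX character class so each path works as a Git attributes glob pattern. A minimal sketch of generating one such rule (the function name `lfs_attribute_line` is illustrative; only space escaping is handled, matching the entries above):

```python
def lfs_attribute_line(path: str) -> str:
    """Render a .gitattributes Git LFS rule for one file, escaping literal
    spaces as the POSIX class [[:space:]], as in the hunk above."""
    pattern = path.replace(" ", "[[:space:]]")
    return f"{pattern} filter=lfs diff=lfs merge=lfs -text"

print(lfs_attribute_line("2023/A Formal Perspective on Byte-Pair Encoding/x_origin.pdf"))
```

Escaping spaces this way keeps one rule per line, since an unescaped space would otherwise split the pattern from the attribute list.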
2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/02c65ea8-73c1-4552-88f9-a2d5a7f7c433_content_list.json ADDED
@@ -0,0 +1,2048 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "$2 * n$ is better than $n^2$ : Decomposing Event Coreference Resolution into Two Tractable Problems",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 129,
8
+ 84,
9
+ 867,
10
+ 121
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Shafiuddin Rehan Ahmed<sup>1</sup> Abhijnan Nath<sup>2</sup> James H. Martin<sup>1</sup> Nikhil Krishnaswamy<sup>2</sup>",
17
+ "bbox": [
18
+ 115,
19
+ 131,
20
+ 880,
21
+ 149
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "$^{1}$ Department of Computer Science, University of Colorado, Boulder, CO, USA {shah7567, james.martin}@colorado.edu",
28
+ "bbox": [
29
+ 181,
30
+ 156,
31
+ 820,
32
+ 191
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "$^{2}$ Department of Computer Science, Colorado State University, Fort Collins, CO, USA {abhijnan.nath, nkrishna}@colostate.edu",
39
+ "bbox": [
40
+ 152,
41
+ 196,
42
+ 847,
43
+ 231
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "Abstract",
50
+ "text_level": 1,
51
+ "bbox": [
52
+ 260,
53
+ 252,
54
+ 339,
55
+ 268
56
+ ],
57
+ "page_idx": 0
58
+ },
59
+ {
60
+ "type": "text",
61
+ "text": "Event Coreference Resolution (ECR) is the task of linking mentions of the same event either within or across documents. Most mention pairs are not coreferent, yet many that are coreferent can be identified through simple techniques such as lemma matching of the event triggers or the sentences in which they appear. Existing methods for training coreference systems sample from a largely skewed distribution, making it difficult for the algorithm to learn coreference beyond surface matching. Additionally, these methods are intractable because of the quadratic operations needed. To address these challenges, we break the problem of ECR into two parts: a) a heuristic to efficiently filter out a large number of non-coreferent pairs, and b) a training approach on a balanced set of coreferent and non-coreferent mention pairs. By following this approach, we show that we get comparable results to the state of the art on two popular ECR datasets while significantly reducing compute requirements. We also analyze the mention pairs that are \"hard\" to accurately classify as coreferent or non-coreferent<sup>1</sup>.",
62
+ "bbox": [
63
+ 144,
64
+ 282,
65
+ 460,
66
+ 624
67
+ ],
68
+ "page_idx": 0
69
+ },
70
+ {
71
+ "type": "text",
72
+ "text": "1 Introduction",
73
+ "text_level": 1,
74
+ "bbox": [
75
+ 114,
76
+ 639,
77
+ 258,
78
+ 655
79
+ ],
80
+ "page_idx": 0
81
+ },
82
+ {
83
+ "type": "text",
84
+ "text": "Event coreference resolution (ECR) is the task of finding mentions of the same event within the same document (known as \"within-document coreference resolution,\" or WDCR) or across text (known as \"cross-document coreference resolution,\" or CDCR) documents. This task is used for knowledge graph construction, event salience detection and question answering (Postma et al., 2018).",
85
+ "bbox": [
86
+ 112,
87
+ 667,
88
+ 487,
89
+ 795
90
+ ],
91
+ "page_idx": 0
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "Traditionally, ECR is performed on pairs of event mentions by calculating the similarity between them and subsequently using a clustering algorithm to identify ECR relations through transitivity. The pairwise similarity is estimated using a supervised machine learning method, where an",
96
+ "bbox": [
97
+ 112,
98
+ 797,
99
+ 487,
100
+ 892
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "text",
106
+ "text": "algorithm is trained to distinguish between positive and negative examples based on ground truth. The positive examples are all pairs of coreferent mentions, while the negative examples are all pairs of non-coreferent mentions. To avoid comparing completely unrelated events, the negative pairs are only selected from documents coming from the set of related topics.",
107
+ "bbox": [
108
+ 507,
109
+ 253,
110
+ 882,
111
+ 381
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "text",
117
+ "text": "Many coreferent pairs are similar on the surface, meaning that the event triggers (the words or phrases referring to the event) have the same lemma and appear in similar sentences. We can use these features in a heuristic to further classify the positive $(\\mathrm{P}^{+})$ and negative $(\\mathrm{P}^{-})$ pairs into four categories:",
118
+ "bbox": [
119
+ 507,
120
+ 382,
121
+ 882,
122
+ 494
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "list",
128
+ "sub_type": "text",
129
+ "list_items": [
130
+ "1. $\\mathbf{P}_{\\mathrm{easy}}^{+}$ : coreferent/positive mention pairs with high surface similarity.",
131
+ "2. $\\mathrm{P_{FN}^{+}}$ : coreferent/positive mention pairs with low surface similarity.",
132
+ "3. $\\mathbf{P}_{\\mathrm{hard}}^{-}$ : non-coreferent/negative mention pairs with high surface similarity.",
133
+ "4. $\\mathrm{P}_{\\mathrm{TN}}^{-}$ : non-coreferent/negative mention pairs with low surface similarity"
134
+ ],
135
+ "bbox": [
136
+ 522,
137
+ 498,
138
+ 880,
139
+ 638
140
+ ],
141
+ "page_idx": 0
142
+ },
143
+ {
144
+ "type": "text",
145
+ "text": "As shown in Figure 1, $\\mathrm{P}_{\\text {easy }}^{+}$ represents coreferent mention pairs that can be correctly identified by the heuristic, but $\\mathrm{P}_{\\text {hard }}^{-}$ are non-coreferent pairs that might be difficult for the heuristic to identify. Similarly, $\\mathrm{P}_{\\mathrm{TN}}^{-}$ (True Negatives) are non-coreferent pairs that the heuristic can correctly infer, but $\\mathrm{P}_{\\mathrm{FN}}^{+}$ (False Negatives) require additional reasoning (that Indianapolis is coreferent with Colts) to make the coreference judgement.",
146
+ "bbox": [
147
+ 507,
148
+ 643,
149
+ 882,
150
+ 788
151
+ ],
152
+ "page_idx": 0
153
+ },
154
+ {
155
+ "type": "text",
156
+ "text": "Most mention pairs are non-coreferent, comprising all pairs corresponding to $\\mathrm{P}_{\\mathrm{hard}}^{-}$ and $\\mathrm{P}_{\\mathrm{TN}}^{-}$ . However, we observe that the distribution of the three categories $(\\mathrm{P}_{\\mathrm{easy}}^{+}, \\mathrm{P}_{\\mathrm{hard}}^{-}$ , and $\\mathrm{P}_{\\mathrm{FN}}^{+})$ is fairly similar across most ECR datasets, with $\\mathrm{P}_{\\mathrm{TN}}^{-}$ causing the imbalance between positive and negative pairs. Previous methods do not differentiate between these four categories and randomly select",
157
+ "bbox": [
158
+ 507,
159
+ 789,
160
+ 884,
161
+ 917
162
+ ],
163
+ "page_idx": 0
164
+ },
165
+ {
166
+ "type": "page_footnote",
167
+ "text": "code repo: github.com/ahmeshaf/lemma_ce_coref",
168
+ "bbox": [
169
+ 134,
170
+ 903,
171
+ 478,
172
+ 917
173
+ ],
174
+ "page_idx": 0
175
+ },
176
+ {
177
+ "type": "image",
178
+ "img_path": "images/9e0ff249f502983afe08e8f0e2b096d020cc1d1ed3aad0417037fb775a72c394.jpg",
179
+ "image_caption": [
180
+ "Figure 1: In this approach, we use a lemma-based heuristic to identify coreference, or the relationship between two mentions in a text that refer to the same event. We compare the similarity between the event trigger, which is highlighted in bold and italic, and the lemmas, or base forms, of the sentences. The heuristic classifies the mention pairs \"P $_{\\text{easy}}^+$ \" and \"P $_{\\text{hard}}^-$ \" as coreferent, and \"P $_{\\text{FN}}^+$ \" and \"P $_{\\text{TN}}^-$ \" as not coreferent. \"P $_{\\text{easy}}^+$ \" and \"P $_{\\text{TN}}^-$ \" are correct predictions, meaning they are classified correctly as coreferent and not coreferent. \"P $_{\\text{hard}}^-$ \" and \"P $_{\\text{FN}}^+$ \" are incorrect predictions, meaning they are misclassified as coreferent and not coreferent."
181
+ ],
182
+ "image_footnote": [],
183
+ "bbox": [
184
+ 115,
185
+ 83,
186
+ 884,
187
+ 211
188
+ ],
189
+ "page_idx": 1
190
+ },
191
+ {
192
+ "type": "text",
193
+ "text": "the positive and negative pairs to train their coreference systems from this heavily skewed distribution. This makes it challenging for the coreference algorithm to identify coreferent links among a large number of non-coreferent ones. Furthermore, as ECR is performed on $n^2$ number of mention pairs, where $n$ is the number of mentions in the corpus, these methods can become intractable for a large corpus.",
194
+ "bbox": [
195
+ 110,
196
+ 338,
197
+ 487,
198
+ 482
199
+ ],
200
+ "page_idx": 1
201
+ },
202
+ {
203
+ "type": "text",
204
+ "text": "To improve the efficiency of the ECR process while achieving near sate of the art (SOTA) results, we divide the problem into two manageable subtasks: a) a heuristic to efficiently and accurately filter out a large number of $\\mathrm{P}_{\\mathrm{TN}}^{-}$ as a way of balancing the skewed distribution, and b) an ECR system trained on the balanced set of coreferent and noncoreferent mention pairs $(\\mathrm{P}_{\\mathrm{easy}}^{+}$ and $\\mathrm{P}_{\\mathrm{hard}}^{-}$ . This approach also eases the analysis of some of the mention pairs that are difficult to classify with an ECR system, which we present in this paper.",
205
+ "bbox": [
206
+ 112,
207
+ 483,
208
+ 489,
209
+ 661
210
+ ],
211
+ "page_idx": 1
212
+ },
213
+ {
214
+ "type": "text",
215
+ "text": "2 Related Work",
216
+ "text_level": 1,
217
+ "bbox": [
218
+ 112,
219
+ 671,
220
+ 270,
221
+ 686
222
+ ],
223
+ "page_idx": 1
224
+ },
225
+ {
226
+ "type": "text",
227
+ "text": "Pre-Transformer Methods Pre-Transformer language model-related works in event coreference such as Kenyon-Dean et al. (2018) trained neural models with customized objective (loss) functions to generate richer representations of mentionpairs using \"static\" embeddings such as contextual Word2Vec (Mikolov et al., 2013) as well as document-level features such as TF-IDF and heuristically-motivated features like mentionrecency, word overlap, and lemma overlap, etc. As such, they improved upon the baselines established by Cybulska and Vossen (2015) on the $\\mathrm{ECB + }$ corpus. Similarly, works such as Barhom et al. (2019) suggest both disjoint and joint-clustering of events",
228
+ "bbox": [
229
+ 112,
230
+ 694,
231
+ 489,
232
+ 919
233
+ ],
234
+ "page_idx": 1
235
+ },
236
+ {
237
+ "type": "text",
238
+ "text": "mentions with their related entity clusters by using a predicate-argument structure. In this, their disjoint model surpassed Kenyon-Dean et al. (2018) by 9.5 F1 points using the CoNLL scorer (Pradhan et al., 2014) whereas their joint model improved upon the disjoint model by 1.2 points for entities and 1 point for events.",
239
+ "bbox": [
240
+ 507,
241
+ 338,
242
+ 884,
243
+ 450
244
+ ],
245
+ "page_idx": 1
246
+ },
247
+ {
248
+ "type": "text",
249
+ "text": "Transformer-based Cross-encoding Most recent works (Meged et al., 2020; Zeng et al., 2020; Cattan et al., 2021; Allaway et al., 2021; Caciularu et al., 2021; Held et al., 2021; Yu et al., 2022a) in CDCR have shown success in using pairwise mention representation learning models, a method popularly known as cross-encoding. These methods use distributed and contextually-enriched \"nonstatic\" vector representations of mentions from large, Transformer-based language models like various BERT-variants to calculate supervised pairwise scores for those event mentions. At inference, such works use variations of incremental or agglomerative clustering techniques to form predicted coreference links and evaluate their chains on gold coreference standards. The methods vary with the context they use for cross-encoding. Cattan et al. (2021) use only sentence-level context, Held et al. (2021) use context from sentences surrounding the mentions, and Caciularu et al. (2021) use context from entire documents.",
+ "bbox": [
+ 507,
+ 451,
+ 884,
+ 788
+ ],
+ "page_idx": 1
+ },
+ {
+ "type": "text",
+ "text": "In our research, we have focused on the CDLM model from Caciularu et al. (2021) and their methodology, which combines enhanced pretraining with the global attention mechanism inspired by Beltagy et al. (2020) and finetuning on a task-specific dataset using pretrained special tokens to generate more semantically-enhanced embeddings for mentions.",
+ "bbox": [
+ 507,
+ 790,
+ 885,
+ 919
+ ],
+ "page_idx": 1
+ },
+ {
+ "type": "text",
+ "text": "Beltagy et al. (2020) and Caciularu et al. (2021) cleverly use the global attention mechanism to reduce the typically quadratic complexity of pairwise scoring of mentions in coreference resolution to linear, while also accommodating longer documents (up to 4,096 tokens). Previous works such as Baldwin (1997), Stoyanov and Eisner (2012), Lee et al. (2012), and Lee et al. (2013) also reduce computation time by strategically using deterministic, rule-based systems along with neural architectures.",
+ "bbox": [
+ 112,
+ 84,
+ 489,
+ 243
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "Recently, pruning $\\mathrm{P_{TN}^{-}}$ for ECR has been shown to be effective by Held et al. (2021). They create individual representations for mentions and use them in a bi-encoder method to retrieve potential coreferent candidates, which are later refined using a cross-encoder trained on hard negative examples. In contrast, our approach utilizes a computationally efficient pruning heuristic and trains the cross-encoder on a smaller dataset. We also conduct an error analysis on all hard examples that are misclassified by the cross-encoder, which is made feasible by the heuristic.",
+ "bbox": [
+ 115,
+ 244,
+ 489,
+ 437
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "3 Datasets",
+ "text_level": 1,
+ "bbox": [
+ 112,
+ 448,
+ 223,
+ 462
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "We experiment with two popular ECR datasets that are distinguished by how effective a lemma heuristic is on each.",
+ "bbox": [
+ 112,
+ 470,
+ 489,
+ 518
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "3.1 Event Coreference Bank Plus (ECB+)",
+ "text_level": 1,
+ "bbox": [
+ 112,
+ 530,
+ 455,
+ 544
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "The $\\mathrm{ECB + }$ corpus (Cybulska and Vossen, 2014) is a popular English corpus used to train and evaluate systems for event coreference resolution. It extends the EventCorefBank corpus (ECB; Bejan and Harabagiu, 2010) with annotations from around 500 additional documents. The corpus includes annotations of text spans that represent events, as well as information about how those events are related through coreference. We divide the documents from topics 1 to 35 into the training and validation sets$^2$, and those from 36 to 45 into the test set, following the approach of Cybulska and Vossen (2015).",
+ "bbox": [
+ 112,
+ 548,
+ 489,
+ 757
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "3.2 Gun Violence Corpus (GVC)",
+ "text_level": 1,
+ "bbox": [
+ 112,
+ 768,
+ 386,
+ 783
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "The Gun Violence Corpus (Vossen et al., 2018) is a recent English corpus exclusively focusing on event coreference resolution. It is intended to be a more challenging dataset than $\\mathrm{ECB + }$, which has a very strong lemma baseline (Cybulska and Vossen, 2014). It is a collection of texts surrounding a",
+ "bbox": [
+ 112,
+ 788,
+ 489,
+ 885
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "table",
+ "img_path": "images/b12a3bfb10827008cd3d26c2834a2076540be29b7960cafe3aa611d5895a7367.jpg",
+ "table_caption": [],
+ "table_footnote": [
+ "Table 1: ECB+ and GVC Corpus statistics for event mentions. T/ST = topics/sub-topics, D = documents, M = event mentions, C = clusters, S = singletons."
+ ],
+ "table_body": "<table><tr><td rowspan=\"2\"></td><td colspan=\"3\">ECB+</td><td colspan=\"3\">GVC</td></tr><tr><td>Train</td><td>Dev</td><td>Test</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>T/ST</td><td>25</td><td>8</td><td>10/20</td><td>1/170</td><td>1/37</td><td>1/34</td></tr><tr><td>D</td><td>594</td><td>196</td><td>206</td><td>358</td><td>78</td><td>74</td></tr><tr><td>M</td><td>3808</td><td>1245</td><td>1780</td><td>5313</td><td>977</td><td>1008</td></tr><tr><td>C</td><td>1464</td><td>409</td><td>805</td><td>991</td><td>228</td><td>194</td></tr><tr><td>S</td><td>1053</td><td>280</td><td>623</td><td>252</td><td>70</td><td>43</td></tr></table>",
+ "bbox": [
+ 512,
+ 80,
+ 882,
+ 186
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "single topic (gun violence) and various sub-topics. Since it does not have coreference links across sub-topics, we only consider mention pairs within the sub-topics. We use the data split by Bugert et al. (2021). Table 1 contains the statistics for $\\mathrm{ECB + }$ and GVC corpora.",
+ "bbox": [
+ 507,
+ 253,
+ 884,
+ 350
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "4 System Overview",
+ "text_level": 1,
+ "bbox": [
+ 507,
+ 363,
+ 694,
+ 379
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "There are two major components in our system: the heuristic and the discriminator (cross-encoder) trained on the output of the heuristic.",
+ "bbox": [
+ 507,
+ 384,
+ 882,
+ 431
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "4.1 Lemma Heuristics (LH, LHOra)",
+ "text_level": 1,
+ "bbox": [
+ 507,
+ 441,
+ 794,
+ 456
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "A key feature of ECR is its high baseline achieved by comparing the lemmas of mention triggers and sentences. To leverage this feature, we incorporate it as the first step in our coreference resolution system. We utilize $\\mathsf{spaCy}^3$, a widely-used tool for this task, to extract the lemmas. In addition to matching lemmas of triggers, we also create and utilize a set of synonymous<sup>4</sup> lemma pairs that commonly appear in coreferent mention pairs in our training set. This approach allows us to identify coreferent mention pairs that have different triggers and improve the overall recall. The heuristic, LH, only utilizes the synonymous lemma pairs from the training set. We also evaluate the performance of $\\mathsf{LH}_{\\mathsf{Ora}}$, which uses synonymous lemma pairs from the entire dataset, meaning it uses the coreference information of the development and test sets to create the synonymous lemma pairs.",
+ "bbox": [
+ 505,
+ 462,
+ 882,
+ 751
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "For a mention pair (A, B) with triggers $(t_A, t_B)$, head lemmas $(l_A, l_B)$, and a given synonymous lemma pair set $(\\mathrm{Syn}_{\\mathbb{P}})$, we consider only the pairs that satisfy at least one of the following rules:",
+ "bbox": [
+ 507,
+ 753,
+ 882,
+ 816
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "list",
+ "sub_type": "text",
+ "list_items": [
+ "- $(l_A, l_B) \\in \\mathrm{Syn}_{\\mathbb{P}}$",
+ "- $l_{A} = l_{B}$",
+ "- $t_B$ contains $l_A$"
+ ],
+ "bbox": [
+ 529,
+ 821,
+ 668,
+ 875
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "3https://spacy.io/, model en_core_web_md v3.4",
+ "bbox": [
+ 529,
+ 879,
+ 838,
+ 892
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "text",
+ "text": "4The words need not be synonyms in the strict sense, but rather need only appear together in coreference chains.",
+ "bbox": [
+ 507,
+ 892,
+ 880,
+ 917
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "page_footnote",
+ "text": "2The validation set includes documents from topics 2, 5, 12, 18, 21, 34, and 35.",
+ "bbox": [
+ 112,
+ 891,
+ 489,
+ 917
+ ],
+ "page_idx": 2
+ },
+ {
+ "type": "image",
+ "img_path": "images/9878c5dd41e62deddda9a6f71cf8b3aa7ee051d28beed08fb94183eedcd6ee49.jpg",
+ "image_caption": [
+ "Figure 2: Coreferent vs. non-coreferent mention pairs ratio across datasets."
+ ],
+ "image_footnote": [],
+ "bbox": [
+ 115,
+ 85,
+ 485,
+ 239
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "- $t_A$ contains $l_B$",
+ "bbox": [
+ 134,
+ 306,
+ 272,
+ 321
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "For mention pairs whose trigger lemmas/triggers match or are synonymous, we proceed by comparing the context of the mentions. In this work, we only compare the mention's sentence to check for similarities between two mentions. To further refine our comparison, we remove stop words and convert the tokens in the text to their base form. Then, we determine the overlap between the two mentions and predict that the pair is coreferent if the overlap exceeds a certain threshold. We tune the threshold using the development sets.",
+ "bbox": [
+ 112,
+ 330,
+ 489,
+ 521
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "4.1.1 Filtering out $\\mathbf{P}_{\\mathrm{TN}}^{-}$",
+ "text_level": 1,
+ "bbox": [
+ 112,
+ 531,
+ 302,
+ 548
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "Cross-document coreference systems often struggle with a skewed distribution of mention pairs, as seen in Figure 2. In any dataset, only $5 - 10\\%$ of the pairs are coreferring, while the remaining $90 - 95\\%$ are non-coreferent. To address this, we use the heuristic to balance the distribution by selectively removing non-coreferent pairs $\\left(\\mathrm{P}_{\\mathrm{TN}}^{-}\\right)$, while minimizing the loss of coreferent pairs $\\left(\\mathrm{P}_{\\mathrm{FN}}^{+}\\right)$. We do this by only considering the mention pairs that the heuristic predicts as coreferent, and discarding the non-coreferent ones.",
+ "bbox": [
+ 112,
+ 551,
+ 489,
+ 726
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "4.1.2 $\\mathbf{P}_{\\mathrm{hard}}^{-},\\mathbf{P}_{\\mathrm{easy}}^{+}$ and $\\mathbf{P}_{\\mathrm{FN}}^{+}$ Analysis",
+ "text_level": 1,
+ "bbox": [
+ 112,
+ 734,
+ 406,
+ 753
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "$\\mathbf{P}_{\\mathrm{easy}}^{+}$ and $\\mathbf{P}_{\\mathrm{hard}}^{-}$: As defined earlier, $\\mathbf{P}_{\\mathrm{easy}}^{+}$ are the mention pairs that the heuristic correctly predicts as coreferent when compared to the ground truth, and $\\mathbf{P}_{\\mathrm{hard}}^{-}$ are the heuristic's predictions of coreference that are incorrect when compared to the ground truth. In §4.2.1, we go through how we fix the heuristic's $\\mathbf{P}_{\\mathrm{hard}}^{-}$ predictions while minimizing the errors introduced in terms of $\\mathbf{P}_{\\mathrm{easy}}^{+}$.",
+ "bbox": [
+ 112,
+ 756,
+ 487,
+ 885
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "$\\mathbf{P}_{\\mathrm{FN}}^{+}$: We define a pair as a $\\mathrm{P}_{\\mathrm{FN}}^{+}$ only if it cannot be linked to the true cluster through subsequent steps.",
+ "bbox": [
+ 112,
+ 887,
+ 489,
+ 919
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "image",
+ "img_path": "images/044e65281cd9e35d29dd2b480b601b1d42b22fbc62d2ba4b4c16dd883c34761a.jpg",
+ "image_caption": [
+ "Figure 3: Counting size of mention pairs $(\\mathrm{P_{FN}^{+}}$ and $\\mathrm{P}_{\\mathrm{easy}}^{+})$ in a true cluster $\\{\\mathbf{a},\\mathbf{b},\\mathbf{c}\\}$ using the heuristic's coreferent predictions (solid line) and non-coreferent predictions (dotted line). We count $\\mathrm{P_{FN}^{+}}$ after performing transitive closure, resulting in a size of 0 (instead of 1) in (2)."
+ ],
+ "image_footnote": [],
+ "bbox": [
+ 581,
+ 80,
+ 776,
+ 204
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "As shown in Figure 3, if a true cluster is $\\{a, b, c\\}$ and the heuristic discards one pair $(a, c)$, it will not be considered as a $\\mathrm{P}_{\\mathrm{FN}}^{+}$ because the coreference can be inferred through transitivity. However, if it discards two pairs $\\{(a, c), (b, c)\\}$, they will both be considered as $\\mathrm{P}_{\\mathrm{FN}}^{+}$. We hypothesize that an ideal heuristic is one that maintains a balance between $\\mathrm{P}_{\\mathrm{easy}}^{+}$ and $\\mathrm{P}_{\\mathrm{hard}}^{-}$ while minimizing $\\mathrm{P}_{\\mathrm{FN}}^{+}$, and therefore, we tune the heuristic's threshold accordingly using the development sets of the corpora.",
+ "bbox": [
+ 507,
+ 311,
+ 884,
+ 472
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "text",
+ "text": "We evaluate the heuristics LH and $\\mathrm{LH}_{\\mathrm{Ora}}$ by plotting the distributions $\\mathrm{P}_{\\mathrm{easy}}^{+}$, $\\mathrm{P}_{\\mathrm{hard}}^{-}$, and $\\mathrm{P}_{\\mathrm{FN}}^{+}$ generated by each for the two corpora. From Figure 4, we observe similar distributions for the test and development sets with the chosen threshold value from the development set. We also observe that LH causes a significant number of $\\mathrm{P}_{\\mathrm{FN}}^{+}$, while $\\mathrm{LH}_{\\mathrm{Ora}}$ has a minimal number of $\\mathrm{P}_{\\mathrm{FN}}^{+}$. Minimizing the count of $\\mathrm{P}_{\\mathrm{FN}}^{+}$ is important as it directly affects",
+ "bbox": [
+ 507,
+ 473,
+ 882,
+ 618
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "image",
+ "img_path": "images/c5e006ebd082aa3d824fe189830e8d7ca2cf22c9bae0159c388c7bcba48a0457.jpg",
+ "image_caption": [
+ "Figure 4: LH and $\\mathsf{LH}_{\\mathsf{Ora}}$ distributions of $\\mathrm{P}_{\\mathrm{hard}}^{-}$, $\\mathrm{P}_{\\mathrm{easy}}^{+}$ and $\\mathrm{P_{FN}^{+}}$ for the ECB+ and GVC corpora. $\\mathsf{LH}_{\\mathsf{Ora}}$ ensures no (or negligible) loss in $\\mathbf{P}_{\\mathbf{FN}}^{+}$."
+ ],
+ "image_footnote": [],
+ "bbox": [
+ 531,
+ 643,
+ 863,
+ 862
+ ],
+ "page_idx": 3
+ },
+ {
+ "type": "image",
+ "img_path": "images/7ebf31c5a73012f5696cba402b3718344b806a02f81d5562ce4be313a8402fb4.jpg",
+ "image_caption": [
+ "Figure 5: The cross-encoding technique to generate the coreference score between the mention pair (A, B). This involves adding special tokens, $\\langle m \\rangle$ and $\\langle /m \\rangle$, around the event triggers, and then combining and processing the two mentions through a transformer-based language model. Certain outputs of the transformer ($\\mathrm{E_{CLS}}$, $\\mathrm{E_A}$, $\\mathrm{E_B}$) are then concatenated and fed into a classifier, which produces a score between 0 and 1 indicating the degree of coreference between the two mentions."
+ ],
+ "image_footnote": [],
+ "bbox": [
+ 117,
+ 83,
+ 484,
+ 313
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "the system's recall. The distributions of $\\mathrm{P}_{\\mathrm{easy}}^{+}$ and $\\mathrm{P}_{\\mathrm{hard}}^{-}$ remain balanced across all datasets, except when $\\mathrm{LH}_{\\mathrm{Ora}}$ is used on GVC, where there are twice as many $\\mathrm{P}_{\\mathrm{hard}}^{-}$ as $\\mathrm{P}_{\\mathrm{easy}}^{+}$. $\\mathrm{P}_{\\mathrm{hard}}^{-}$ should be minimized as it can affect the system's overall precision.",
+ "bbox": [
+ 112,
+ 488,
+ 489,
+ 571
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "4.2 Cross-Encoder",
+ "text_level": 1,
+ "bbox": [
+ 112,
+ 606,
+ 278,
+ 620
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "A common technique to perform ECR is to use Transformer-based cross-encoding (CE) on the mention pair (A, B). This process, depicted in Figure 5, begins by surrounding the trigger with special tokens $(<m>$ and $</m>)$. The mentions are then combined into a single input for the transformer (e.g., RoBERTa). The pooled output of the transformer $(\\mathrm{E_{CLS}})$ and the outputs corresponding to the tokens of the event triggers $(\\mathrm{E}_{\\mathrm{A}}$ and $\\mathrm{E_B})$ are extracted.$^5$ $\\mathrm{E_{CLS}}$, $\\mathrm{E_A}$, $\\mathrm{E_B}$, and the element-wise product of the mention embeddings $(\\mathrm{E}_{\\mathrm{A}}\\odot \\mathrm{E}_{\\mathrm{B}})$ are all concatenated to create a unified representation of the mention pair. This representation is used, with a classifier, to learn the coreference score, CE(A, B), between the pair after finetuning the transformer.",
+ "bbox": [
+ 112,
+ 632,
+ 489,
+ 873
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "4.2.1 $\\mathbf{P}_{\\mathrm{easy}}^{+}$ & $\\mathbf{P}_{\\mathrm{hard}}^{-}$ Discriminator (D)",
+ "text_level": 1,
+ "bbox": [
+ 507,
+ 84,
+ 816,
+ 102
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "The cross-encoder's encoding is non-symmetric: depending on the order in which the mentions are concatenated, it gives different coreference scores. In reality, the order should not matter when predicting whether the two events are the same. We propose a symmetric cross-encoding scorer where we take the average of the scores predicted from both orders of concatenation. So for a mention pair, $p = (\\mathrm{A},\\mathrm{B})$, the symmetric cross-encoder coreference scorer (D) is given as:",
+ "bbox": [
+ 507,
+ 104,
+ 884,
+ 263
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "equation",
+ "text": "\n$$\n\\mathrm {D} (p) = \\frac {\\mathrm {C E} (\\mathrm {A} , \\mathrm {B}) + \\mathrm {C E} (\\mathrm {B} , \\mathrm {A})}{2} \\tag {1}\n$$\n",
+ "text_format": "latex",
+ "bbox": [
+ 579,
+ 275,
+ 882,
+ 306
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "We employ a cross-encoder with a symmetric scorer, as outlined in Equation 1, as the discriminator for $\\mathrm{P}_{\\mathrm{easy}}^{+}$ and $\\mathrm{P}_{\\mathrm{hard}}^{-}$. We conduct experiments utilizing two different Transformer models, RoBERTa (Dsmall) and Longformer (Dlong), which vary in their maximum input capacity.",
+ "bbox": [
+ 507,
+ 311,
+ 882,
+ 407
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "5 Experimental Setup",
+ "text_level": 1,
+ "bbox": [
+ 507,
+ 419,
+ 717,
+ 435
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "We describe our process of training, prediction, and hyperparameter choice in this section.",
+ "bbox": [
+ 507,
+ 445,
+ 880,
+ 476
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "5.1 Mention Pair Generation",
+ "text_level": 1,
+ "bbox": [
+ 507,
+ 488,
+ 754,
+ 502
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "We use the gold mentions from the datasets. Following previous methods, we generate all the pairs $(\\mathrm{P_{all}})$ of mentions $(M^v)$ from documents coming from the same topic. We use gold topics in the training phase and predicted topics through document clustering in the prediction phase (Bugert et al., 2021).",
+ "bbox": [
+ 507,
+ 508,
+ 882,
+ 620
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "5.2 Training Phase",
+ "text_level": 1,
+ "bbox": [
+ 507,
+ 632,
+ 675,
+ 646
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "During the training phase, we leverage LH to generate a balanced set of positive and negative samples, labeled as $\\mathrm{P}_{\\mathrm{easy}}^{+}$ and $\\mathrm{P}_{\\mathrm{hard}}^{-}$ , respectively. These samples are then used to train our models, $\\mathrm{D}_{\\mathrm{small}}$ and $\\mathrm{D}_{\\mathrm{long}}$ separately, using the Binary Cross Entropy Loss (BCE) function as follows:",
+ "bbox": [
+ 507,
+ 653,
+ 882,
+ 749
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "equation",
+ "text": "\n$$\nL = -\\sum_{\\substack{p_{+}\\in \\mathrm{P}_{\\text{easy}}^{+},\\\\ p_{-}\\in \\mathrm{P}_{\\text{hard}}^{-}}}\\left[\\log \\mathrm{D}(p_{+}) + \\log \\left(1 - \\mathrm{D}(p_{-})\\right)\\right]\n$$\n",
+ "text_format": "latex",
+ "bbox": [
+ 532,
+ 760,
+ 857,
+ 812
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "Unlike traditional methods, we do not rely on random sampling or artificial balancing of the dataset. Instead, our heuristic ensures that the positive and negative samples are naturally balanced (as depicted in Figure 6). A side-effect of adopting this approach is that some of the positive samples are",
+ "bbox": [
+ 507,
+ 822,
+ 882,
+ 919
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "page_footnote",
+ "text": "${}^{5}\\mathrm{E}_{\\mathrm{A}}$ and $\\mathrm{E_B}$ represent the sum of the output embedding of each token for event triggers with multiple tokens.",
+ "bbox": [
+ 112,
+ 891,
+ 487,
+ 917
+ ],
+ "page_idx": 4
+ },
+ {
+ "type": "text",
+ "text": "Algorithm 1 Training Phase",
+ "text_level": 1,
+ "bbox": [
+ 115,
+ 84,
+ 329,
+ 99
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "Require: $D$ : training document set",
+ "bbox": [
+ 115,
+ 103,
+ 383,
+ 118
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$T$ : gold topics",
+ "bbox": [
+ 134,
+ 120,
+ 242,
+ 134
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$M^v$ : gold event mentions in $D$",
+ "bbox": [
+ 134,
+ 136,
+ 363,
+ 149
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$S^v$ : sentences of the mentions",
+ "bbox": [
+ 134,
+ 152,
+ 357,
+ 166
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$D^v$ : documents of the mentions",
+ "bbox": [
+ 134,
+ 168,
+ 369,
+ 181
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$G$ : gold mention cluster map",
+ "bbox": [
+ 134,
+ 184,
+ 349,
+ 199
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$P\\gets$ TopicMentionPairs $(M^v,T)$",
+ "bbox": [
+ 134,
+ 215,
+ 381,
+ 231
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$\\mathrm{Syn}_{\\mathbb{P}} \\gets \\text{SynonymousLemmaPairs}(P, G)$",
+ "bbox": [
+ 134,
+ 231,
+ 438,
+ 247
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$\\mathbf{P}_{\\mathrm{easy}}^{+}, \\mathbf{P}_{\\mathrm{hard}}^{-}, \\mathbf{P}_{\\mathrm{FN}}^{+}, \\mathbf{P}_{\\mathrm{TN}}^{-} \\gets \\mathrm{LH}(P, G, \\mathrm{Syn}_{\\mathbf{P}}, S^{v})$",
+ "bbox": [
+ 134,
+ 247,
+ 463,
+ 265
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$\\mathsf{D}_{\\mathrm{long}} \\gets \\mathsf{TrainCrossEncoder}(\\mathsf{P}_{\\mathrm{easy}}^{+}, \\mathsf{P}_{\\mathrm{hard}}^{-}, D^{v})$",
+ "bbox": [
+ 134,
+ 266,
+ 475,
+ 282
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$\\mathsf{D}_{\\mathrm{small}} \\gets \\mathsf{TrainCrossEncoder}(\\mathbf{P}_{\\mathrm{easy}}^{+}, \\mathbf{P}_{\\mathrm{hard}}^{-}, S^{v})$",
+ "bbox": [
+ 134,
+ 284,
+ 478,
+ 300
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "return $\\mathrm{Syn}_{\\mathrm{P}}, \\mathrm{D}_{\\mathrm{long}}, \\mathrm{D}_{\\mathrm{small}}$",
+ "bbox": [
+ 134,
+ 300,
+ 321,
+ 316
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "excluded from training. We do this to keep the training and prediction phases consistent and to ensure the cross-encoder is not confused by the inclusion of these hard positive examples.",
+ "bbox": [
+ 110,
+ 346,
+ 487,
+ 411
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "Additionally, for D with Longformer, we utilize the entire document for training, while for D with RoBERTa, we only use the sentence containing the mention to provide contextual information. We employ the Adam optimizer with a learning rate of 0.0001 for the classifier and 0.00001 for fine-tuning the Transformer model. This entire process is illustrated in Algorithm 1.",
+ "bbox": [
+ 110,
+ 412,
+ 487,
+ 539
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "To ensure optimal performance, we train our system separately for both the $\\mathrm{ECB + }$ and GVC training sets. We utilize a single NVIDIA A100 GPU",
+ "bbox": [
+ 112,
+ 541,
+ 489,
+ 590
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "image",
+ "img_path": "images/7231456e8ebf517a9b3307d207136f8703102dabbb7f73dc12c72bd6892eaf5d.jpg",
+ "image_caption": [],
+ "image_footnote": [],
+ "bbox": [
+ 134,
+ 605,
+ 295,
+ 709
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "image",
+ "img_path": "images/85879c8c0f3e39c710bebb23033d8cf7da70b7f7eb7015ec55978e2632877ba4.jpg",
+ "image_caption": [],
+ "image_footnote": [],
+ "bbox": [
+ 302,
+ 604,
+ 463,
+ 709
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "image",
+ "img_path": "images/68725d14490616c1977c09cb93d6714468170e34cc41c9c6329510c78b04a646.jpg",
+ "image_caption": [],
+ "image_footnote": [],
+ "bbox": [
+ 141,
+ 709,
+ 295,
+ 808
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "image",
+ "img_path": "images/706be8391a3493a91c96af1d018d8797ba173f7eea810c16bc07241e7d9bc55e.jpg",
+ "image_caption": [],
+ "image_footnote": [],
+ "bbox": [
+ 302,
+ 711,
+ 457,
+ 808
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "image",
+ "img_path": "images/f33dbd0338c941eef83b7523c27eaa228c93725844f657810f9567caa8c54909.jpg",
+ "image_caption": [
+ "positive"
+ ],
+ "image_footnote": [],
+ "bbox": [
+ 211,
+ 810,
+ 233,
+ 821
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "image",
+ "img_path": "images/d18353d9aa4ef3619b7ca2c64bc704e0354ecabccb0f349a1a6fbf8d1c506ff7.jpg",
+ "image_caption": [
+ "negative",
+ "Figure 6: Training Samples of previous methods vs. ours. The heuristic creates a balanced and significantly smaller training set for $\\mathrm{ECB + }$ . For GVC, the heuristic discards half of the negative samples while somewhat balancing the dataset."
+ ],
+ "image_footnote": [],
+ "bbox": [
+ 294,
+ 813,
+ 310,
+ 821
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "Algorithm 2 Prediction Phase",
+ "text_level": 1,
+ "bbox": [
+ 510,
+ 84,
+ 737,
+ 99
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "Require: $D$ : testing document set",
+ "bbox": [
+ 510,
+ 103,
+ 769,
+ 118
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$T$ : gold/clustered topics",
+ "bbox": [
+ 529,
+ 118,
+ 709,
+ 135
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$M^v$ : gold event mentions in $D$",
+ "bbox": [
+ 531,
+ 136,
+ 757,
+ 149
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$S^v$ : sentences of the mentions",
+ "bbox": [
+ 531,
+ 152,
+ 752,
+ 165
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$\\mathrm{Syn_P}$: synonymous lemma pairs from training",
+ "bbox": [
+ 529,
+ 167,
+ 868,
+ 183
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$\\mathsf{D}_{\\mathrm{small}}, \\mathsf{D}_{\\mathrm{long}}$ : trained CE discriminators",
+ "bbox": [
+ 529,
+ 185,
+ 815,
+ 199
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$P\\gets$ TopicMentionPairs $(M^v,T)$",
+ "bbox": [
+ 529,
+ 215,
+ 776,
+ 231
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$\\mathrm{A_H,P^+}\\leftarrow \\mathrm{LH}(P,\\mathrm{Syn_P},S^v)$",
+ "bbox": [
+ 529,
+ 231,
+ 734,
+ 247
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$\\mathrm{A_P}\\leftarrow \\mathsf{D}_{\\mathrm{small}}(\\mathbf{P}^{+}) > 0.5$",
+ "bbox": [
+ 529,
+ 248,
+ 707,
+ 263
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "$\\mathrm{A_P}\\leftarrow \\mathsf{D}_{\\mathrm{long}}(\\mathbf{P}^{+}) > 0.5$",
+ "bbox": [
+ 529,
+ 265,
+ 702,
+ 280
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "return ConnectedComponents $(\\mathrm{A_H})$,",
+ "bbox": [
+ 529,
+ 284,
+ 801,
+ 299
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "ConnectedComponents $(\\mathrm{A_P})$",
+ "bbox": [
+ 584,
+ 300,
+ 793,
+ 315
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "with 80GB memory to train $\\mathsf{D}_{\\mathrm{long}}$ with the Longformer model, and a single NVIDIA RTX 3090 GPU (24 GB) for training $\\mathsf{D}_{\\mathrm{small}}$ with the RoBERTa-BASE model. We train each system for 10 epochs, with each epoch taking approximately one hour for the Longformer model and 15 minutes for the RoBERTa model.",
+ "bbox": [
+ 507,
+ 346,
+ 884,
+ 457
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "5.3 Prediction Phase",
+ "text_level": 1,
+ "bbox": [
+ 507,
+ 464,
+ 687,
+ 478
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "In the prediction phase, we first pass the mention pairs through the heuristic and create an adjacency matrix called $\\mathrm{A_H}$ based on its coreferent predictions. The ones predicted not coreferent by the heuristic are discarded. This step is crucial in terms of making the task tractable. Next, we pass the mention pairs that are predicted to be coreferent by the heuristic through $\\mathrm{D_{small}}$ and $\\mathrm{D_{long}}$ separately. Using the subsequent coreferent predictions from these models, we generate another adjacency matrix $\\mathrm{A_P}$ . To create event clusters, we use these matrices to identify connected components.",
+ "bbox": [
+ 507,
+ 482,
+ 882,
+ 675
+ ],
+ "page_idx": 5
+ },
+ {
+ "type": "text",
+ "text": "As a baseline, we use the matrix $\\mathrm{A_H}$ to generate the clusters. We then use $\\mathrm{A_P}$ to assess the improvements made by using $\\mathrm{D_{small}}$ and $\\mathrm{D_{long}}$ over the baseline. This process is illustrated in Algorithm 2. The process takes 6-10 minutes to run with the Longformer model and 1-2 minutes with the RoBERTa one.",
1279
+ "bbox": [
1280
+ 507,
1281
+ 677,
1282
+ 882,
1283
+ 788
1284
+ ],
1285
+ "page_idx": 5
1286
+ },
1287
+ {
1288
+ "type": "text",
1289
+ "text": "6 Results",
1290
+ "text_level": 1,
1291
+ "bbox": [
1292
+ 507,
1293
+ 802,
1294
+ 606,
1295
+ 816
1296
+ ],
1297
+ "page_idx": 5
1298
+ },
1299
+ {
1300
+ "type": "text",
1301
+ "text": "We evaluate the event clusters formed using the standard coreference evaluation metrics (MUC, $B^{3}$ , $CEAF_{e}$ , LEA and CoNLL F1—the average of MUC, $B^{3}$ and $CEAF_{e}$ Vilain et al. (1995); Bagga and Baldwin (1998); Luo (2005); Luo et al. (2014); Pradhan et al. (2014); Moosavi et al. (2019)). We",
1302
+ "bbox": [
1303
+ 507,
1304
+ 822,
1305
+ 882,
1306
+ 917
1307
+ ],
1308
+ "page_idx": 5
1309
+ },
1310
+ {
1311
+ "type": "table",
1312
+ "img_path": "images/ab2d6dd8c5736065bbbf73b3eaf56422749258c9668cab73a49407a6aa926d66.jpg",
1313
+ "table_caption": [],
1314
+ "table_footnote": [],
1315
+ "table_body": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"2\">CoNLL F1</td></tr><tr><td>ECB+</td><td>GVC</td></tr><tr><td>Bugert et al. (2021)</td><td>-</td><td>59.4</td></tr><tr><td>Cattan et al. (2021)</td><td>81.0</td><td>-</td></tr><tr><td>Caciularu et al. (2021)</td><td>85.6</td><td>-</td></tr><tr><td>Held et al. (2021)</td><td>85.7</td><td>83.7</td></tr><tr><td>LH</td><td>76.4</td><td>51.8</td></tr><tr><td>LH + Dsmall</td><td>80.3</td><td>73.7</td></tr><tr><td>LH + Dlong</td><td>81.7</td><td>75.0</td></tr><tr><td>LHOra</td><td>81.9</td><td>53.4</td></tr><tr><td>LHOra + Dsmall</td><td>85.9</td><td>75.4</td></tr><tr><td>LHOra + Dlong</td><td>87.4</td><td>76.1</td></tr></table>",
1316
+ "bbox": [
1317
+ 154,
1318
+ 80,
1319
+ 445,
1320
+ 294
1321
+ ],
1322
+ "page_idx": 6
1323
+ },
1324
+ {
1325
+ "type": "text",
1326
+ "text": "Table 2: Results on within and cross-document event coreference resolution on $\\mathrm{ECB + }$ and GVC test sets.",
1327
+ "bbox": [
1328
+ 112,
1329
+ 306,
1330
+ 485,
1331
+ 332
1332
+ ],
1333
+ "page_idx": 6
1334
+ },
1335
+ {
1336
+ "type": "text",
1337
+ "text": "run the baseline results (LH and $\\mathrm{LH_{Ora}}$ ) and the combination of each heuristic with the two discriminators $(\\mathrm{LH} / \\mathrm{LH_{Ora}} + \\mathrm{D_{small}} / \\mathrm{D_{long}})$ . We compare to previous methods for $\\mathrm{ECB+}$ and GVC as shown in Table 2. Bold indicates current or previous SOTA and our best model.",
1338
+ "bbox": [
1339
+ 112,
1340
+ 351,
1341
+ 487,
1342
+ 445
1343
+ ],
1344
+ "page_idx": 6
1345
+ },
1346
+ {
1347
+ "type": "text",
1348
+ "text": "CoNLL F1 scores show that LH and $\\mathrm{LH_{Ora}}$ are strong baselines for the $\\mathrm{ECB + }$ corpus, where $\\mathrm{LH_{Ora}}$ surpasses some of the previous best methods. From this, we can say that making improvements in the heuristic by better methods of finding synonymous lemma pairs is a viable solution for tackling $\\mathrm{ECB + }$ with a heuristic. However, the heuristics fall short for GVC, where $\\mathrm{LH_{Ora}}$ is only marginally better than LH. This may be due to the lower variation in lemmas in the GVC corpus. We hypothesize methods that can automatically detect synonymous lemma pairs will not be beneficial for GVC, and LH itself is sufficient as a heuristic here.",
1349
+ "bbox": [
1350
+ 112,
1351
+ 450,
1352
+ 487,
1353
+ 657
1354
+ ],
1355
+ "page_idx": 6
1356
+ },
1357
+ {
1358
+ "type": "text",
1359
+ "text": "The discriminators consistently make significant improvements over the heuristics across both datasets. For $\\mathrm{ECB + }$ $\\mathsf{D}_{\\mathrm{long}}$ is nearly 2 points better than $\\mathsf{D}_{\\mathrm{small}}$ in terms of the CoNLL measure. Both $\\mathsf{D}_{\\mathrm{small}}$ and $\\mathsf{D}_{\\mathrm{long}}$ when coupled with $\\mathsf{LH}_{\\mathrm{Ora}}$ surpass the state of the art for this dataset. $\\mathsf{LH} + \\mathsf{D}_{\\mathrm{long}}$ beats Cattan et al. (2021) but falls short of SOTA, albeit by only 4 points. On GVC, both fall short of SOTA (Held et al., 2021) by only 8-9 points on CoNLL F1, with substantially fewer computations. In terms of computational cost-to-performance ratio, as we elaborate in §7.1, our methods outperform all the previous methods.",
1360
+ "bbox": [
1361
+ 112,
1362
+ 659,
1363
+ 487,
1364
+ 868
1365
+ ],
1366
+ "page_idx": 6
1367
+ },
1368
+ {
1369
+ "type": "text",
1370
+ "text": "For ECR, where context is key, we would expect better performance from encoders with longer context. $\\mathsf{D}_{\\mathrm{long}}$ and $\\mathsf{D}_{\\mathrm{small}}$ show this trend for both",
1371
+ "bbox": [
1372
+ 112,
1373
+ 871,
1374
+ 487,
1375
+ 919
1376
+ ],
1377
+ "page_idx": 6
1378
+ },
1379
+ {
1380
+ "type": "image",
1381
+ "img_path": "images/5fdf4f5b8c80380ee5a0d808c46d90892a38e95187a9fefa7a6e2b9f87e4d9e5.jpg",
1382
+ "image_caption": [
1383
+ "Figure 7: Prediction Phase Time Complexity in terms of Mention Pair Encoding."
1384
+ ],
1385
+ "image_footnote": [],
1386
+ "bbox": [
1387
+ 517,
1388
+ 86,
1389
+ 831,
1390
+ 282
1391
+ ],
1392
+ "page_idx": 6
1393
+ },
1394
+ {
1395
+ "type": "text",
1396
+ "text": "ECB+ and GVC datasets. However, the gain we get from using the entire document is not substantial for the amount of additional computation required. An interesting line of future work would to automatically detect the core sections in the document that contribute to coreference and then only use that as context for ECR.",
1397
+ "bbox": [
1398
+ 507,
1399
+ 334,
1400
+ 882,
1401
+ 445
1402
+ ],
1403
+ "page_idx": 6
1404
+ },
1405
+ {
1406
+ "type": "text",
1407
+ "text": "7 Discussion",
1408
+ "text_level": 1,
1409
+ "bbox": [
1410
+ 509,
1411
+ 458,
1412
+ 636,
1413
+ 474
1414
+ ],
1415
+ "page_idx": 6
1416
+ },
1417
+ {
1418
+ "type": "text",
1419
+ "text": "7.1 Time Complexity Analysis",
1420
+ "text_level": 1,
1421
+ "bbox": [
1422
+ 507,
1423
+ 485,
1424
+ 763,
1425
+ 501
1426
+ ],
1427
+ "page_idx": 6
1428
+ },
1429
+ {
1430
+ "type": "text",
1431
+ "text": "The heuristic is a very fast process that scales linearly with the number of mentions in a corpus. Specifically, by hashing the lemma pairs and sentence token lemmas, this step performs linear comparisons of mention pairs at prediction. The mention pair cross-encoding with Transformer is a computationally intensive process. A method that encodes all mention pairs in a large corpus can become intractable. Our method, however, is linear in complexity with the number of mentions, as shown in Figure 7, and outperforms previous methods in terms of computational efficiency. While Held et al. (2021)'s cross-encoding at prediction is linear $(5^{*}\\mathfrak{n})$ , their pruning step is quadratic. They rely additionally on training a bi-encoder and a mention neighborhood detector step that requires GPUs.",
1432
+ "bbox": [
1433
+ 505,
1434
+ 507,
1435
+ 882,
1436
+ 764
1437
+ ],
1438
+ "page_idx": 6
1439
+ },
1440
+ {
1441
+ "type": "text",
1442
+ "text": "7.2 Synonymous Lemma Pairs",
1443
+ "text_level": 1,
1444
+ "bbox": [
1445
+ 507,
1446
+ 772,
1447
+ 766,
1448
+ 788
1449
+ ],
1450
+ "page_idx": 6
1451
+ },
1452
+ {
1453
+ "type": "text",
1454
+ "text": "We have established an upper limit for ECR using the $\\mathrm{LH}_{\\mathrm{Ora}} + \\mathrm{D}_{\\mathrm{long}}$ method for $\\mathrm{ECB+}$ . Previous methods such as Held et al. (2021), use an oracle coreference scorer after their pruning step. In other words, their oracle assumption involves using a perfect cross-encoder. In contrast, we only use the oracle for pruning by assuming a perfect set of synonymous lemma pairs. This means that",
1455
+ "bbox": [
1456
+ 505,
1457
+ 790,
1458
+ 882,
1459
+ 917
1460
+ ],
1461
+ "page_idx": 6
1462
+ },
1463
+ {
1464
+ "type": "text",
1465
+ "text": "improved pruning methods can lead to better ECR performance. We believe that it is possible to create a more effective synonymous pair detector than $\\mathrm{LH}_{\\mathrm{Ora}}$ by adopting recent work on predicate class detection (Brown et al., 2014, 2022) that use VerbNet (Schuler, 2005). In future research, we aim to enhance the process of generating synonymous pairs through the use of cross-encoding or additional steps such as word sense disambiguation with the Proposition Bank (Palmer et al., 2005; Pradhan et al., 2022). Identifying the sense of the trigger will help refine the lemma pairs that appear in coreference chains. Additionally, annotating the sense of the trigger is a straightforward process that can be easily incorporated into annotation procedures for new datasets, which is more efficient than coreference annotations.",
1466
+ "bbox": [
1467
+ 112,
1468
+ 84,
1469
+ 489,
1470
+ 357
1471
+ ],
1472
+ "page_idx": 7
1473
+ },
1474
+ {
1475
+ "type": "text",
1476
+ "text": "7.3 Qualitative Error Analysis",
1477
+ "text_level": 1,
1478
+ "bbox": [
1479
+ 112,
1480
+ 367,
1481
+ 371,
1482
+ 382
1483
+ ],
1484
+ "page_idx": 7
1485
+ },
1486
+ {
1487
+ "type": "text",
1488
+ "text": "We carry out a comprehensive analysis on errors the discriminator makes after the heuristic's predictions. Unlike previous methods (Barhom et al., 2019) where they sample a subset of mentions to carry out the error analysis, we do so for the entire dataset. By efficiently discarding the large number of $\\mathrm{P}_{\\mathrm{TN}}^{-}$ , we are able to isolate the shortcomings of the crossencoder, analyze them and offer solutions. Table 6 in Appendix C lists the various kinds of errors (incorrect and missing links) made by $\\mathrm{D}_{\\mathrm{small}}$ on the $\\mathrm{ECB}+$ and GVC dev sets.",
1489
+ "bbox": [
1490
+ 112,
1491
+ 386,
1492
+ 489,
1493
+ 561
1494
+ ],
1495
+ "page_idx": 7
1496
+ },
1497
+ {
1498
+ "type": "text",
1499
+ "text": "We find error categories like same-sentence pronouns, weak temporal reasoning, ambiguity due to corefering entities, misleading lexical similarity, and missed set-member coreferent links. Table 6 in the appendix presents examples of each.",
1500
+ "bbox": [
1501
+ 112,
1502
+ 564,
1503
+ 489,
1504
+ 644
1505
+ ],
1506
+ "page_idx": 7
1507
+ },
1508
+ {
1509
+ "type": "text",
1510
+ "text": "Incorrect links due to same-sentence pronouns like \"it\" and \"this\" can be avoided by refining the heuristics-based mention-pair generation process to exclude same-sentence pronouns. Similarly, ambiguous temporal contexts like \"Saturday\" and \"New Year's Day\" that refer to the day of occurrence of the same event in articles published on different dates can be resolved by leveraging more temporal context/metadata where available. Also, errors in lexically-different but semantically similar event mention lemmas can be reduced by leveraging more-enriched contextual representations.",
1511
+ "bbox": [
1512
+ 112,
1513
+ 645,
1514
+ 489,
1515
+ 837
1516
+ ],
1517
+ "page_idx": 7
1518
+ },
1519
+ {
1520
+ "type": "text",
1521
+ "text": "By using the Oracle for pruning, we can focus on where $\\mathsf{D}_{\\mathrm{small}}$ falls short in terms of false positives. We first sort the final event clusters based on purity (number of non-coreferent links within the cluster compared to ground truth). Next, we identify",
1522
+ "bbox": [
1523
+ 112,
1524
+ 839,
1525
+ 489,
1526
+ 920
1527
+ ],
1528
+ "page_idx": 7
1529
+ },
1530
+ {
1531
+ "type": "text",
1532
+ "text": "pairs that the discriminator incorrectly predicted to be coreferent within these clusters, specifically focusing on highly impure clusters. We look for these pairs in highly impure clusters and analyze the mention sentences. Our findings are as follows:",
1533
+ "bbox": [
1534
+ 507,
1535
+ 84,
1536
+ 884,
1537
+ 164
1538
+ ],
1539
+ "page_idx": 7
1540
+ },
1541
+ {
1542
+ "type": "list",
1543
+ "sub_type": "text",
1544
+ "list_items": [
1545
+ "- Problems caused when two big clusters are joined through very similar (almost adversarial) examples, e.g., \"British hiker\" vs. \"New Zealand hiker.\" This error can be fixed by performing an additional level of clustering, such as, K-means.",
1546
+ "- Problems with set-member relations, such as \"shootings\" being grouped with specific \"shooting\" events. The sets often include many non-coreferent member events. To address this issue, we can identify whether an event is plural or singular prior to coreference resolution.",
1547
+ "- Contrary to the notion that singleton mentions cause the most errors, we found that singletons appear in the least impure clusters. This means the cross-encoder discriminator is good in separating out singletons."
1548
+ ],
1549
+ "bbox": [
1550
+ 531,
1551
+ 170,
1552
+ 885,
1553
+ 467
1554
+ ],
1555
+ "page_idx": 7
1556
+ },
1557
+ {
1558
+ "type": "text",
1559
+ "text": "8 Conclusion & Future work",
1560
+ "text_level": 1,
1561
+ "bbox": [
1562
+ 507,
1563
+ 475,
1564
+ 778,
1565
+ 491
1566
+ ],
1567
+ "page_idx": 7
1568
+ },
1569
+ {
1570
+ "type": "text",
1571
+ "text": "We showed that a simple heuristic paired with a crossencoder does comparable ECR to more complicated methods while being computationally efficient. We set a upper bound for the performance on $\\mathrm{ECB + }$ suggesting improvement with better synonyms pairs detection we can achieve better results. Through extensive error analysis, we presented the shortcomings of the crossencoder in this task and suggested ways to improve it.",
1572
+ "bbox": [
1573
+ 507,
1574
+ 500,
1575
+ 884,
1576
+ 645
1577
+ ],
1578
+ "page_idx": 7
1579
+ },
1580
+ {
1581
+ "type": "text",
1582
+ "text": "Future research directions include applying our method to the more challenging task of cross-subtopic event coreference (e.g., FCC (Bugert et al., 2020)) where scalability and compute-efficiency are crucial metrics, making the current heuristic-based mention pair generation process \"learnable\" using an auxiliary cross-encoder, and incorporating word-sense disambiguation and lemma-pair annotations into the pipeline to resolve lexical ambiguity. An exciting direction for future work made tractable by our work is to incorporate additional cross-encoding features into the pipeline, especially using the latest advancements in visual transformers (Dosovitskiy et al., 2021; Bao et al., 2021; Liu et al., 2021; Radford et al., 2021). Another important direction is to test our method on languages with a richer morphology than English.",
1583
+ "bbox": [
1584
+ 507,
1585
+ 646,
1586
+ 885,
1587
+ 919
1588
+ ],
1589
+ "page_idx": 7
1590
+ },
1591
+ {
1592
+ "type": "text",
1593
+ "text": "Limitations",
1594
+ "text_level": 1,
1595
+ "bbox": [
1596
+ 114,
1597
+ 83,
1598
+ 220,
1599
+ 99
1600
+ ],
1601
+ "page_idx": 8
1602
+ },
1603
+ {
1604
+ "type": "text",
1605
+ "text": "The most evident limitation of this research is that is has only been demonstrated on English coreference. Using a lemma-based heuristic requires using a lemmatization algorithm in the preprocessing phase and for more morphologically complex languages, especially low-resourced ones, lemmatization technology is less well-developed and may not be a usable part of our pipeline. Application to more morphologically-rich languages is among our planned research directions.",
1606
+ "bbox": [
1607
+ 112,
1608
+ 110,
1609
+ 489,
1610
+ 271
1611
+ ],
1612
+ "page_idx": 8
1613
+ },
1614
+ {
1615
+ "type": "text",
1616
+ "text": "In addition, all our experiments are performed on the gold standard mentions from $\\mathrm{ECB + }$ and GVC, meaning that coreference resolution is effectively independent of mention detection, and therefore we have no evidence how our method would fare in a pipeline where the two are coupled.",
1617
+ "bbox": [
1618
+ 112,
1619
+ 272,
1620
+ 489,
1621
+ 369
1622
+ ],
1623
+ "page_idx": 8
1624
+ },
1625
+ {
1626
+ "type": "text",
1627
+ "text": "A further limitation is that training of the cross-encoders still requires intensive usage of GPU hardware (the GPU used for training Longformer is particularly high-end).",
1628
+ "bbox": [
1629
+ 112,
1630
+ 370,
1631
+ 489,
1632
+ 434
1633
+ ],
1634
+ "page_idx": 8
1635
+ },
1636
+ {
1637
+ "type": "text",
1638
+ "text": "Ethics Statement",
1639
+ "text_level": 1,
1640
+ "bbox": [
1641
+ 114,
1642
+ 447,
1643
+ 265,
1644
+ 463
1645
+ ],
1646
+ "page_idx": 8
1647
+ },
1648
+ {
1649
+ "type": "text",
1650
+ "text": "We use publicly-available datasets, meaning any bias or offensive content in those datasets risks being reflected in our results. By its nature, the Gun Violence Corpus contains violent content that may be troubling for some.",
1651
+ "bbox": [
1652
+ 112,
1653
+ 475,
1654
+ 487,
1655
+ 555
1656
+ ],
1657
+ "page_idx": 8
1658
+ },
1659
+ {
1660
+ "type": "text",
1661
+ "text": "We make extensive use of GPUs for training the discriminator models as part of our pipeline. While this has implications for resource consumption and access implications for those without similar hardware, the linear time complexity of our solution presents a way forward that relies less overall on GPU hardware than previous approaches, increasing the ability to perform event coreference resolution in low-compute settings.",
1662
+ "bbox": [
1663
+ 112,
1664
+ 556,
1665
+ 489,
1666
+ 702
1667
+ ],
1668
+ "page_idx": 8
1669
+ },
1670
+ {
1671
+ "type": "text",
1672
+ "text": "Acknowledgements",
1673
+ "text_level": 1,
1674
+ "bbox": [
1675
+ 114,
1676
+ 715,
1677
+ 285,
1678
+ 732
1679
+ ],
1680
+ "page_idx": 8
1681
+ },
1682
+ {
1683
+ "type": "text",
1684
+ "text": "We would like to express our sincere gratitude to the anonymous reviewers whose insightful comments and constructive feedback helped to greatly improve the quality of this paper. We gratefully acknowledge the support of U.S. Defense Advanced Research Projects Agency (DARPA) FA8750-18-2-0016-AIDA - RAMFIS: Representations of vectors and Abstract Meanings for Information Synthesis. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of",
1685
+ "bbox": [
1686
+ 112,
1687
+ 741,
1688
+ 489,
1689
+ 919
1690
+ ],
1691
+ "page_idx": 8
1692
+ },
1693
+ {
1694
+ "type": "text",
1695
+ "text": "DARPA or the U.S. government. Finally, we extend our thanks to the BoulderNLP group and the SIG-NAL Lab at Colorado State for their valuable input and collaboration throughout the development of this work.",
1696
+ "bbox": [
1697
+ 507,
1698
+ 84,
1699
+ 884,
1700
+ 164
1701
+ ],
1702
+ "page_idx": 8
1703
+ },
1704
+ {
1705
+ "type": "text",
1706
+ "text": "References",
1707
+ "text_level": 1,
1708
+ "bbox": [
1709
+ 509,
1710
+ 191,
1711
+ 608,
1712
+ 206
1713
+ ],
1714
+ "page_idx": 8
1715
+ },
1716
+ {
1717
+ "type": "list",
1718
+ "sub_type": "ref_text",
1719
+ "list_items": [
1720
+ "Emily Allaway, Shuai Wang, and Miguel Ballesteros. 2021. Sequential cross-document coreference resolution. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4659-4671, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
1721
+ "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In In The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference, pages 563-566.",
1722
+ "Breck Baldwin. 1997. CogNIAC: high precision coreference with limited knowledge and linguistic resources. In *Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts*.",
1723
+ "Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. 2021. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254.",
1724
+ "Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and Ido Dagan. 2019. Revisiting joint modeling of cross-document entity and event coreference resolution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4179-4189, Florence, Italy. Association for Computational Linguistics.",
1725
+ "Cosmin Bejan and Sanda Harabagiu. 2010. Unsupervised event coreference resolution with rich linguistic features. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1412-1422, Uppsala, Sweden. Association for Computational Linguistics.",
1726
+ "Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv e-prints, pages arXiv-2004.",
1727
+ "Susan Windisch Brown, Julia Bonn, Ghazaleh Kazeminejad, Annie Zaenen, James Pustejovsky, and Martha Palmer. 2022. Semantic representations for nlp using verbnet and the generative lexicon. Frontiers in artificial intelligence, 5.",
1728
+ "Susan Windisch Brown, Dmitriy Dligach, and Martha Palmer. 2014. Verbnet class assignment as a wsd task. Computing Meaning: Volume 4, pages 203-216.",
1729
+ "Michael Bugert, Nils Reimers, Shany Barhom, Ido Dagan, and Iryna Gurevych. 2020. Breaking the subtopic barrier in cross-document event coreference resolution. In Text2story@ecir, pages 23-29."
1730
+ ],
1731
+ "bbox": [
1732
+ 510,
1733
+ 212,
1734
+ 885,
1735
+ 919
1736
+ ],
1737
+ "page_idx": 8
1738
+ },
1739
+ {
1740
+ "type": "list",
1741
+ "sub_type": "ref_text",
1742
+ "list_items": [
1743
+ "Michael Bugert, Nils Reimers, and Iryna Gurevych. 2021. Generalizing cross-document event coreference resolution across multiple corpora. Computational Linguistics, 47(3):575-614.",
1744
+ "Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew Peters, Arie Cattan, and Ido Dagan. 2021. CDLM: Cross-document language modeling. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 2648-2662, Punta Cana, Dominican Republic. Association for Computational Linguistics.",
1745
+ "Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, and Ido Dagan. 2021. Cross-document coreference resolution over predicted mentions. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 5100-5107, Online. Association for Computational Linguistics.",
1746
+ "Agata Cybulska and Piek Vossen. 2014. Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4545-4552, Reykjavik, Iceland. European Language Resources Association (ELRA).",
1747
+ "Agata Cybulska and Piek Vossen. 2015. Translating granularity of event slots into features for event coreference resolution. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 1-10, Denver, Colorado. Association for Computational Linguistics.",
1748
+ "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations.",
1749
+ "William Held, Dan Iter, and Dan Jurafsky. 2021. Focus on what matters: Applying discourse coherence theory to cross document coreference. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1406-1417, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
1750
+ "Kian Kenyon-Dean, Jackie Chi Kit Cheung, and Doina Precup. 2018. Resolving event coreference with supervised representation learning and clustering-oriented regularization. arXiv preprint arXiv:1805.10985.",
1751
+ "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational linguistics, 39(4):885-916.",
1752
+ "Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and"
1753
+ ],
1754
+ "bbox": [
1755
+ 115,
1756
+ 85,
1757
+ 489,
1758
+ 917
1759
+ ],
1760
+ "page_idx": 9
1761
+ },
1762
+ {
1763
+ "type": "list",
1764
+ "sub_type": "ref_text",
1765
+ "list_items": [
1766
+ "event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489-500, Jeju Island, Korea. Association for Computational Linguistics.",
1767
+ "Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, pages 10012-10022.",
1768
+ "Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05, page 25-32, USA. Association for Computational Linguistics.",
1769
+ "Xiaoqiang Luo, Sameer Pradhan, Marta Recasens, and Eduard Hovy. 2014. An extension of BLANC to system mentions. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 24-29, Baltimore, Maryland. Association for Computational Linguistics.",
1770
+ "Yehudit Meged, Avi Caciularu, Vered Shwartz, and Ido Dagan. 2020. Paraphrasing vs coreferring: Two sides of the same coin. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 4897-4907, Online. Association for Computational Linguistics.",
1771
+ "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
1772
+ "Nafise Sadat Moosavi, Leo Born, Massimo Poesio, and Michael Strube. 2019. Using automatically extracted minimum spans to disentangle coreference evaluation from boundary detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4168-4178, Florence, Italy. Association for Computational Linguistics.",
1773
+ "Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.",
1774
+ "Marten Postma, Filip Ilievski, and Piek Vossen. 2018. SemEval-2018 task 5: Counting events and participants in the long tail. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 70–80, New Orleans, Louisiana. Association for Computational Linguistics.",
1775
+ "Sameer Pradhan, Julia Bonn, Skatje Myers, Kathryn Conger, Tim O'gorman, James Gung, Kristin Wright-bettner, and Martha Palmer. 2022. PropBank comes of Age—Larger, smarter, and more diverse. In Proceedings of the 11th Joint Conference on Lexical and"
1776
+ ],
1777
+ "bbox": [
1778
+ 510,
1779
+ 85,
1780
+ 882,
1781
+ 917
1782
+ ],
1783
+ "page_idx": 9
1784
+ },
1785
+ {
1786
+ "type": "text",
1787
+ "text": "Computational Semantics, pages 278-288, Seattle, Washington. Association for Computational Linguistics.",
1788
+ "bbox": [
1789
+ 131,
1790
+ 85,
1791
+ 489,
1792
+ 124
1793
+ ],
1794
+ "page_idx": 10
1795
+ },
1796
+ {
1797
+ "type": "text",
1798
+ "text": "Sameer Pradhan, Xiaogiang Luo, Marta Recasens, Edward Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 30-35, Baltimore, Maryland. Association for Computational Linguistics.",
1799
+ "bbox": [
1800
+ 114,
1801
+ 135,
1802
+ 489,
1803
+ 242
1804
+ ],
1805
+ "page_idx": 10
1806
+ },
1807
+ {
1808
+ "type": "text",
1809
+ "text": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR.",
1810
+ "bbox": [
1811
+ 114,
1812
+ 250,
1813
+ 489,
1814
+ 331
1815
+ ],
1816
+ "page_idx": 10
1817
+ },
1818
+ {
1819
+ "type": "text",
1820
+ "text": "Karin Kipper Schuler. 2005. VerbNet: A broad-coverage, comprehensive verb lexicon. University of Pennsylvania.",
1821
+ "bbox": [
1822
+ 114,
1823
+ 338,
1824
+ 489,
1825
+ 380
1826
+ ],
1827
+ "page_idx": 10
1828
+ },
1829
+ {
1830
+ "type": "text",
1831
+ "text": "Veselin Stoyanov and Jason Eisner. 2012. Easy-first coreference resolution. In Proceedings of COLING 2012, pages 2519-2534.",
1832
+ "bbox": [
1833
+ 114,
1834
+ 388,
1835
+ 489,
1836
+ 430
1837
+ ],
1838
+ "page_idx": 10
1839
+ },
1840
+ {
1841
+ "type": "text",
1842
+ "text": "Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the 6th Conference on Message Understanding, MUC6 '95, page 45-52, USA. Association for Computational Linguistics.",
1843
+ "bbox": [
1844
+ 114,
1845
+ 439,
1846
+ 489,
1847
+ 519
1848
+ ],
1849
+ "page_idx": 10
1850
+ },
1851
+ {
1852
+ "type": "text",
1853
+ "text": "Piek Vossen, Filip Ilievski, Marten Postma, and Roxane Segers. 2018. Don't annotate, but validate: A data-to-text method for capturing event data. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).",
1854
+ "bbox": [
1855
+ 114,
1856
+ 527,
1857
+ 489,
1858
+ 595
1859
+ ],
1860
+ "page_idx": 10
1861
+ },
1862
+ {
1863
+ "type": "text",
1864
+ "text": "Xiaodong Yu, Wenpeng Yin, and Dan Roth. 2022a. Pairwise representation learning for event coreference. In Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, pages 69-78, Seattle, Washington. Association for Computational Linguistics.",
1865
+ "bbox": [
1866
+ 114,
1867
+ 604,
1868
+ 489,
1869
+ 682
1870
+ ],
1871
+ "page_idx": 10
1872
+ },
1873
+ {
1874
+ "type": "text",
1875
+ "text": "Xiaodong Yu, Wenpeng Yin, and Dan Roth. 2022b. Pairwise representation learning for event coreference. In Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, pages 69-78.",
1876
+ "bbox": [
1877
+ 114,
1878
+ 693,
1879
+ 489,
1880
+ 747
1881
+ ],
1882
+ "page_idx": 10
1883
+ },
1884
+ {
1885
+ "type": "text",
1886
+ "text": "Yutao Zeng, Xiaolong Jin, Saiping Guan, Jiafeng Guo, and Xueqi Cheng. 2020. Event coreference resolution with their paraphrases and argument-aware embeddings. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3084-3094, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
1887
+ "bbox": [
1888
+ 114,
1889
+ 756,
1890
+ 489,
1891
+ 848
1892
+ ],
1893
+ "page_idx": 10
1894
+ },
1895
+ {
1896
+ "type": "text",
1897
+ "text": "A Ablation Study of Global Attention",
1898
+ "text_level": 1,
1899
+ "bbox": [
1900
+ 114,
1901
+ 860,
1902
+ 455,
1903
+ 878
1904
+ ],
1905
+ "page_idx": 10
1906
+ },
1907
+ {
1908
+ "type": "text",
1909
+ "text": "Table 3 compares $\\mathsf{D}_{\\mathrm{long}}$ performance with and without Longformer global attention on the ECB+ and",
1910
+ "bbox": [
1911
+ 112,
1912
+ 887,
1913
+ 489,
1914
+ 919
1915
+ ],
1916
+ "page_idx": 10
1917
+ },
1918
+ {
1919
+ "type": "table",
1920
+ "img_path": "images/1022ca9b4a92d78609f7f2b88eedc9c14f0ad23a1195247b1f896cea483dc73b.jpg",
1921
+ "table_caption": [],
1922
+ "table_footnote": [
1923
+ "Table 3: Table showing the CoNLL F1 scores from the D Encoder with and without Longformer Global Attention on GVC and $\\mathrm{ECB + }$ dev sets."
1924
+ ],
1925
+ "table_body": "<table><tr><td>Features</td><td>ECB+</td><td>GVC</td></tr><tr><td>w/o global attn.</td><td>85.0</td><td>76.5</td></tr><tr><td>w/ global attn.</td><td>82.9</td><td>77.0</td></tr></table>",
+ "bbox": [
+ 556,
+ 80,
+ 836,
+ 149
+ ],
+ "page_idx": 10
+ },
+ {
+ "type": "text",
+ "text": "GVC dev sets. This shows a dataset-specific contrast vis-à-vis sequence length where performance with global attention on GVC dev set is only marginally better than without, while the reverse is seen on the $\\mathrm{ECB + }$ dev set. More specifically, this suggests that perhaps the \"relevant\" or \"core\" context for ECR lies closer to the neighborhood of event lemmas (wrapped by trigger tokens) than the CLS tokens (that use global attention) in both corpora, albeit more so in $\\mathrm{ECB + }$ . As such, applying global attention to the CLS tokens here encodes more irrelevant context. Therefore, $\\mathsf{D}_{\\mathrm{long}}$ with Longformer global attention performs less well on $\\mathrm{ECB + }$ while being almost comparable to $\\mathsf{D}_{\\mathrm{long}}$ without global attention on GVC.",
+ "bbox": [
+ 507,
+ 214,
+ 884,
+ 455
+ ],
+ "page_idx": 10
+ },
+ {
+ "type": "text",
+ "text": "B Full Results",
+ "text_level": 1,
+ "bbox": [
+ 509,
+ 467,
+ 650,
+ 482
+ ],
+ "page_idx": 10
+ },
+ {
+ "type": "text",
+ "text": "Table 4 shows complete results for all metrics from all models for within and cross-document coreference resolution on the GVC test set. Table 5 shows complete results for all metrics from all models on the $\\mathrm{ECB + }$ test set.",
+ "bbox": [
+ 507,
+ 492,
+ 882,
+ 571
+ ],
+ "page_idx": 10
+ },
+ {
+ "type": "text",
+ "text": "C Qualitative Error Examples",
+ "text_level": 1,
+ "bbox": [
+ 507,
+ 583,
+ 789,
+ 602
+ ],
+ "page_idx": 10
+ },
+ {
+ "type": "text",
+ "text": "Table 6 presents an example of each type of error we identified in the output of our discriminator $(\\mathsf{D}_{\\mathrm{small}})$",
+ "bbox": [
+ 507,
+ 609,
+ 882,
+ 659
+ ],
+ "page_idx": 10
+ },
+ {
+ "type": "table",
+ "img_path": "images/180dcee20501ce9a3f745e1b268e4dabb71be89b842dc56c63f23035987efe90.jpg",
+ "table_caption": [],
+ "table_footnote": [],
+ "table_body": "<table><tr><td rowspan=\"2\"></td><td colspan=\"3\">MUC</td><td colspan=\"3\">B3</td><td colspan=\"3\">CEAFe</td><td colspan=\"3\">LEA</td><td>CoNLL</td></tr><tr><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>F1</td></tr><tr><td>Bugert et al. (2021)</td><td>78.1</td><td>66.3</td><td>71.7</td><td>73.6</td><td>49.9</td><td>59.5</td><td>38.2</td><td>60.9</td><td>47.0</td><td>56.5</td><td>38.2</td><td>45.6</td><td>59.4</td></tr><tr><td>Held et al. (2021)</td><td>91.8</td><td>91.2</td><td>91.5</td><td>82.2</td><td>83.8</td><td>83.0</td><td>75.5</td><td>77.9</td><td>76.7</td><td>79.0</td><td>82.3</td><td>80.6</td><td>83.7</td></tr><tr><td>LH</td><td>94.8</td><td>82.0</td><td>87.9</td><td>90.1</td><td>28.5</td><td>43.3</td><td>16.3</td><td>47.8</td><td>24.3</td><td>85.1</td><td>23.9</td><td>37.4</td><td>51.8</td></tr><tr><td>LHOra</td><td>95.2</td><td>82.3</td><td>88.3</td><td>91.2</td><td>29.1</td><td>44.1</td><td>18.6</td><td>54.7</td><td>27.8</td><td>86.4</td><td>24.9</td><td>38.6</td><td>53.4</td></tr><tr><td>LH + Dsmall</td><td>87.0</td><td>89.6</td><td>88.3</td><td>82.3</td><td>67.9</td><td>74.4</td><td>62.0</td><td>55.2</td><td>58.4</td><td>77.6</td><td>57.8</td><td>66.2</td><td>73.7</td></tr><tr><td>LHOra + Dsmall</td><td>89.1</td><td>90.2</td><td>89.6</td><td>85.0</td><td>68.0</td><td>75.6</td><td>62.7</td><td>59.6</td><td>61.1</td><td>80.6</td><td>59.5</td><td>68.5</td><td>75.4</td></tr><tr><td>LH + Dlong</td><td>84.0</td><td>91.1</td><td>87.4</td><td>79.0</td><td>76.4</td><td>77.7</td><td>69.6</td><td>52.5</td><td>59.9</td><td>74.1</td><td>63.9</td><td>68.6</td><td>75.0</td></tr><tr><td>LHOra + Dlong</td><td>84.9</td><td>91.4</td><td>88.0</td><td>80.4</td><td>77.4</td><td>78.9</td><td>70.5</td><td>54.3</td><td>61.3</td><td>75.7</td><td>65.5</td><td>70.2</td><td>76.1</td></tr></table>",
+ "bbox": [
+ 119,
+ 158,
+ 884,
+ 329
+ ],
+ "page_idx": 11
+ },
+ {
+ "type": "table",
+ "img_path": "images/90b2f887ae41a64a786548997c29771d894fcf0ae97a7c90bfc7cf492f859413.jpg",
+ "table_caption": [
+ "Table 4: Results on within and cross-document event coreference resolution on GVC test set. Bolded F1 values indicate current or previous state of the art according to that metric as well as our best model."
+ ],
+ "table_footnote": [],
+ "table_body": "<table><tr><td rowspan=\"2\"></td><td colspan=\"3\">MUC</td><td colspan=\"3\">B3</td><td colspan=\"3\">CEAFe</td><td colspan=\"3\">LEA</td><td>CoNLL</td></tr><tr><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>F1</td></tr><tr><td>Barhom et al. (2019)</td><td>78.1</td><td>84.0</td><td>80.9</td><td>76.8</td><td>86.1</td><td>81.2</td><td>79.6</td><td>73.3</td><td>76.3</td><td>64.6</td><td>72.3</td><td>68.3</td><td>79.5</td></tr><tr><td>Meged et al. (2020)</td><td>78.8</td><td>84.7</td><td>81.6</td><td>75.9</td><td>85.9</td><td>80.6</td><td>81.1</td><td>74.8</td><td>77.8</td><td>64.7</td><td>73.4</td><td>68.8</td><td>80.0</td></tr><tr><td>Cattan et al. (2021)</td><td>85.1</td><td>81.9</td><td>83.5</td><td>82.1</td><td>82.7</td><td>82.4</td><td>75.2</td><td>78.9</td><td>77.0</td><td>68.8</td><td>72.0</td><td>70.4</td><td>81.0</td></tr><tr><td>Zeng et al. (2020)</td><td>85.6</td><td>89.3</td><td>87.5</td><td>77.6</td><td>89.7</td><td>83.2</td><td>84.5</td><td>80.1</td><td>82.3</td><td>-</td><td>-</td><td>-</td><td>84.3</td></tr><tr><td>Yu et al. (2022b)</td><td>88.1</td><td>85.1</td><td>86.6</td><td>86.1</td><td>84.7</td><td>85.4</td><td>79.6</td><td>83.1</td><td>81.3</td><td>-</td><td>-</td><td>-</td><td>84.4</td></tr><tr><td>Allaway et al. (2021)</td><td>81.7</td><td>82.8</td><td>82.2</td><td>80.8</td><td>81.5</td><td>81.1</td><td>79.8</td><td>78.4</td><td>79.1</td><td>-</td><td>-</td><td>-</td><td>80.8</td></tr><tr><td>Caciularu et al. (2021)</td><td>87.1</td><td>89.2</td><td>88.1</td><td>84.9</td><td>87.9</td><td>86.4</td><td>83.3</td><td>81.2</td><td>82.2</td><td>76.7</td><td>77.2</td><td>76.9</td><td>85.6</td></tr><tr><td>Held et al. 
(2021)</td><td>87.0</td><td>88.1</td><td>87.5</td><td>85.6</td><td>87.7</td><td>86.6</td><td>80.3</td><td>85.8</td><td>82.9</td><td>74.9</td><td>73.2</td><td>74.0</td><td>85.7</td></tr><tr><td>LH</td><td>85.1</td><td>75.6</td><td>80.1</td><td>83.2</td><td>72.2</td><td>77.3</td><td>66.2</td><td>78.1</td><td>71.7</td><td>67.3</td><td>62.6</td><td>64.9</td><td>76.4</td></tr><tr><td>LHOra</td><td>99.1</td><td>79.6</td><td>88.3</td><td>97.9</td><td>67.7</td><td>80.0</td><td>65.9</td><td>93.7</td><td>77.4</td><td>85.1</td><td>63.8</td><td>72.9</td><td>81.9</td></tr><tr><td>LH + Dsmall</td><td>76.2</td><td>86.9</td><td>81.2</td><td>77.8</td><td>85.7</td><td>81.6</td><td>83.9</td><td>73.0</td><td>78.1</td><td>68.7</td><td>71.5</td><td>70.1</td><td>80.3</td></tr><tr><td>LHOra + Dsmall</td><td>89.8</td><td>87.6</td><td>88.7</td><td>90.7</td><td>80.2</td><td>85.1</td><td>82.5</td><td>85.1</td><td>83.8</td><td>83.3</td><td>72.2</td><td>77.3</td><td>85.9</td></tr><tr><td>LH + Dlong</td><td>80.0</td><td>87.3</td><td>83.5</td><td>79.6</td><td>85.4</td><td>82.4</td><td>83.1</td><td>75.5</td><td>79.1</td><td>70.5</td><td>73.3</td><td>71.9</td><td>81.7</td></tr><tr><td>LHOra + Dlong</td><td>93.7</td><td>87.9</td><td>90.7</td><td>94.1</td><td>79.6</td><td>86.3</td><td>81.6</td><td>88.7</td><td>85.0</td><td>86.8</td><td>73.2</td><td>79.4</td><td>87.4</td></tr></table>",
2013
+ "bbox": [
2014
+ 117,
2015
+ 527,
2016
+ 884,
2017
+ 785
2018
+ ],
2019
+ "page_idx": 11
2020
+ },
2021
+ {
2022
+ "type": "text",
2023
+ "text": "Table 5: Results on within and cross-document event coreference resolution on $\\mathrm{ECB + }$ test set with gold mentions and predicted topics. Bolded F1 values indicate current or previous state of the art according to that metric as well as our best model.",
2024
+ "bbox": [
2025
+ 112,
2026
+ 794,
2027
+ 882,
2028
+ 835
2029
+ ],
2030
+ "page_idx": 11
2031
+ },
2032
+ {
2033
+ "type": "table",
2034
+ "img_path": "images/361f77a772314ced485aab5b90f0a7c3b0930f2f2d292d053b3e1bcce2a80e60.jpg",
2035
+ "table_caption": [],
2036
+ "table_footnote": [
2037
+ "Table 6: Qualitative Analysis on the hard mention pairs incorrectly linked (or missed) by our Discriminator $(\\mathsf{D}_{\\mathrm{small}})$ in the $\\mathsf{ECB}+$ and GVC dev set: Underlined and bold-faced mentions surrounded by trigger tokens respectively indicate incorrect and missing assignments. Underlined spans without trigger tokens represents the category-specific quality being highlighted. The miscellaneous category (Misc.) refers to other errors including (reasonable) predictions that are either incorrect annotations in the gold data or incomplete gold sentences."
2038
+ ],
2039
+ "table_body": "<table><tr><td>Category</td><td>Snippet</td></tr><tr><td>Adversarial/Conflicting</td><td>British climber &lt;m&gt; dies &lt;/m&gt; in New Zealand fall....The first of the &lt;m&gt; deaths &lt;/m&gt; this weekend was that of a New Zealand climber who fell on Friday morning.</td></tr><tr><td>Adversarial/Conflicting</td><td>British climber &lt;m&gt; dies &lt;/m&gt; in New Zealand fall....Australian Ski Mountaineer &lt;m&gt;Dies&lt;/m&gt; in Fall in New Zealand.</td></tr><tr><td>Adversarial/Conflicting</td><td>..Prosecutor Kym Worthy announces charges against individuals involved in the gun violence &lt;m&gt; deaths &lt;/m&gt; of children in Detroit .... Grandparents charged in 5-year - old &#x27;s shooting &lt;m&gt; death &lt;/m&gt; Buy Photo Wayne County Prosecutor Kym Worthy announces charges against individuals involved in the gun violence deaths of children...</td></tr><tr><td>Pronoun Lemmas</td><td>This just does not happen in this area whatsoever . It &lt;/m&gt;’s just unreal , ” said neighbor Sheila Rawlins....This &lt;/m&gt; just does not happen in this area whatsoever . It ’s just unreal , ” said neighbor Sheila Rawlins .</td></tr><tr><td>Set-Member Relationship</td><td>On Friday , Chicago surpassed 700 &lt;m&gt; homicides &lt;/m&gt; so far this year . ....&lt;m&gt;Homicide &lt;/m&gt; Watch Chicago Javon Wilson , the teenage grandson of U.S. Rep. Danny Davis , was shot to death over what police called an arugment over sneakers in his Englewood home Friday evening .</td></tr><tr><td>Weak Temporal Reasoning</td><td>Police : in an unrelated &lt;m&gt; incident &lt;m&gt; a man was shot at 3:18 a.m. 
Saturday in North Toledo ....Toledo mother grieves 3-year - old ’s &lt;m&gt; shooting &lt;/m&gt; death | Judge sets bond at 580,000 USD for Toledo man accused of rape , kidnapping | Toledo man sentenced to 11 years in New Year ’s Day shooting</td></tr><tr><td>Incomplete, Short Context</td><td>Ellen DeGeneres to &lt;m&gt; Host &lt;/m&gt; Oscars....It will be her second &lt;m&gt; stint &lt;/m&gt; in the job , after hosting the 2007 ceremony and earning an Emmy nomination for it .</td></tr><tr><td>Similar context, Different event times</td><td>near Farmington Road around 9 p.m. There they found a 32-year - old unidentified man with a &lt;m&gt; gunshot &lt;/m&gt; wound outside of a home ....The family was driving about 8:26 p.m. Sunday in the 1100 block of South Commerce Street when &lt;m&gt; gunshots were fired &lt;/m&gt; from a dark sedan that began following their vehicle...</td></tr><tr><td>Same Lemma, Ambiguous Context</td><td>Police : Man Shot To Death In Stockton Related To 3-Year - Old &lt;m&gt; Killed &lt;/m&gt; By Stray Bullet 2 p.m. UPDATE : Stockton Police have identified the man shot and killed on ...Police : Man Shot To Death In Stockton Related To 3-Year - Old Killed By Stray Bullet 2 p.m. UPDATE : Stockton Police have identified the man shot and killed &lt;/m&gt; on Tuesday night.</td></tr><tr><td>Lexically different, Semantically same</td><td>One man is dead after being &lt;m&gt; shot &lt;/m&gt; by a gunman ....Employees at a Vancouver wholesaler were coping Saturday with the death of their boss , who was gunned down &lt;/m&gt; at their office Christmas party .</td></tr><tr><td>Misc.</td><td>Baton Rouge Police have charged 17-year - old Ahmad Antoine of Baton Rouge with Negligent Homicide in the city ’s latest shooting &lt;m&gt; death &lt;/m&gt; ....Tagged Baton Rouge , &lt;m&gt; homicide &lt;/m&gt;.</td></tr></table>",
2040
+ "bbox": [
2041
+ 115,
2042
+ 114,
2043
+ 882,
2044
+ 801
2045
+ ],
2046
+ "page_idx": 12
2047
+ }
2048
+ ]
2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/02c65ea8-73c1-4552-88f9-a2d5a7f7c433_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/02c65ea8-73c1-4552-88f9-a2d5a7f7c433_origin.pdf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8b3bfc68a9b037cafc8d9f707f4143a070cbcc5dce70d351d9885146989509a
+ size 935997
2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/full.md ADDED
@@ -0,0 +1,392 @@
+ # $2*n$ is better than $n^2$: Decomposing Event Coreference Resolution into Two Tractable Problems
+
+ Shafiuddin Rehan Ahmed<sup>1</sup> Abhijnan Nath<sup>2</sup> James H. Martin<sup>1</sup> Nikhil Krishnaswamy<sup>2</sup>
+
+ $^{1}$ Department of Computer Science, University of Colorado, Boulder, CO, USA {shah7567, james.martin}@colorado.edu
+
+ $^{2}$ Department of Computer Science, Colorado State University, Fort Collins, CO, USA {abhijnan.nath, nkrishna}@colostate.edu
+
+ # Abstract
+
+ Event Coreference Resolution (ECR) is the task of linking mentions of the same event either within or across documents. Most mention pairs are not coreferent, yet many that are coreferent can be identified through simple techniques such as lemma matching of the event triggers or the sentences in which they appear. Existing methods for training coreference systems sample from a largely skewed distribution, making it difficult for the algorithm to learn coreference beyond surface matching. Additionally, these methods are intractable because of the quadratic operations needed. To address these challenges, we break the problem of ECR into two parts: a) a heuristic to efficiently filter out a large number of non-coreferent pairs, and b) a training approach on a balanced set of coreferent and non-coreferent mention pairs. By following this approach, we show that we get comparable results to the state of the art on two popular ECR datasets while significantly reducing compute requirements. We also analyze the mention pairs that are "hard" to accurately classify as coreferent or non-coreferent<sup>1</sup>.
+
+ # 1 Introduction
+
+ Event coreference resolution (ECR) is the task of finding mentions of the same event within the same document (known as "within-document coreference resolution," or WDCR) or across documents (known as "cross-document coreference resolution," or CDCR). This task is used for knowledge graph construction, event salience detection, and question answering (Postma et al., 2018).
+
+ Traditionally, ECR is performed on pairs of event mentions by calculating the similarity between them and subsequently using a clustering algorithm to identify ECR relations through transitivity. The pairwise similarity is estimated using a supervised machine learning method, where an algorithm is trained to distinguish between positive and negative examples based on ground truth. The positive examples are all pairs of coreferent mentions, while the negative examples are all pairs of non-coreferent mentions. To avoid comparing completely unrelated events, the negative pairs are only selected from documents coming from the set of related topics.
+
+ Many coreferent pairs are similar on the surface, meaning that the event triggers (the words or phrases referring to the event) have the same lemma and appear in similar sentences. We can use these features in a heuristic to further classify the positive $(\mathrm{P}^{+})$ and negative $(\mathrm{P}^{-})$ pairs into four categories:
+
+ 1. $\mathbf{P}_{\mathrm{easy}}^{+}$ : coreferent/positive mention pairs with high surface similarity.
+ 2. $\mathrm{P_{FN}^{+}}$ : coreferent/positive mention pairs with low surface similarity.
+ 3. $\mathbf{P}_{\mathrm{hard}}^{-}$ : non-coreferent/negative mention pairs with high surface similarity.
+ 4. $\mathrm{P}_{\mathrm{TN}}^{-}$ : non-coreferent/negative mention pairs with low surface similarity.
+
+ As shown in Figure 1, $\mathrm{P}_{\text {easy }}^{+}$ represents coreferent mention pairs that can be correctly identified by the heuristic, but $\mathrm{P}_{\text {hard }}^{-}$ are non-coreferent pairs that might be difficult for the heuristic to identify. Similarly, $\mathrm{P}_{\mathrm{TN}}^{-}$ (True Negatives) are non-coreferent pairs that the heuristic can correctly infer, but $\mathrm{P}_{\mathrm{FN}}^{+}$ (False Negatives) require additional reasoning (that Indianapolis is coreferent with Colts) to make the coreference judgement.
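The four categories follow mechanically from crossing the heuristic's prediction with the gold label. A minimal sketch (function name is ours, and this ignores the transitive-closure refinement of $\mathrm{P}_{\mathrm{FN}}^{+}$ introduced later in the paper):

```python
def categorize(heuristic_coref: bool, gold_coref: bool) -> str:
    """Map a mention pair to one of the four categories above."""
    if heuristic_coref:
        # heuristic says coreferent: either truly coreferent (easy) or not (hard)
        return "P_easy+" if gold_coref else "P_hard-"
    # heuristic says non-coreferent: either a missed coreferent pair or a true negative
    return "P_FN+" if gold_coref else "P_TN-"
```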
+
+ Most mention pairs are non-coreferent, comprising all pairs corresponding to $\mathrm{P}_{\mathrm{hard}}^{-}$ and $\mathrm{P}_{\mathrm{TN}}^{-}$. However, we observe that the distribution of the three categories ($\mathrm{P}_{\mathrm{easy}}^{+}$, $\mathrm{P}_{\mathrm{hard}}^{-}$, and $\mathrm{P}_{\mathrm{FN}}^{+}$) is fairly similar across most ECR datasets, with $\mathrm{P}_{\mathrm{TN}}^{-}$ causing the imbalance between positive and negative pairs. Previous methods do not differentiate between these four categories and randomly select the positive and negative pairs to train their coreference systems from this heavily skewed distribution. This makes it challenging for the coreference algorithm to identify coreferent links among a large number of non-coreferent ones. Furthermore, as ECR is performed on $n^2$ mention pairs, where $n$ is the number of mentions in the corpus, these methods can become intractable for a large corpus.
+
+ ![](images/9e0ff249f502983afe08e8f0e2b096d020cc1d1ed3aad0417037fb775a72c394.jpg)
+ Figure 1: In this approach, we use a lemma-based heuristic to identify coreference, i.e., the relationship between two mentions in a text that refer to the same event. We compare the similarity between the event triggers, highlighted in bold italics, and the lemmas (base forms) of the sentences. The heuristic classifies the mention pairs $\mathrm{P}_{\text{easy}}^{+}$ and $\mathrm{P}_{\text{hard}}^{-}$ as coreferent, and $\mathrm{P}_{\text{FN}}^{+}$ and $\mathrm{P}_{\text{TN}}^{-}$ as not coreferent. $\mathrm{P}_{\text{easy}}^{+}$ and $\mathrm{P}_{\text{TN}}^{-}$ are correct predictions, while $\mathrm{P}_{\text{hard}}^{-}$ and $\mathrm{P}_{\text{FN}}^{+}$ are misclassified as coreferent and not coreferent, respectively.
+
+ To improve the efficiency of the ECR process while achieving near state-of-the-art (SOTA) results, we divide the problem into two manageable subtasks: a) a heuristic to efficiently and accurately filter out a large number of $\mathrm{P}_{\mathrm{TN}}^{-}$ as a way of balancing the skewed distribution, and b) an ECR system trained on the balanced set of coreferent and non-coreferent mention pairs ($\mathrm{P}_{\mathrm{easy}}^{+}$ and $\mathrm{P}_{\mathrm{hard}}^{-}$). This approach also eases the analysis of some of the mention pairs that are difficult to classify with an ECR system, which we present in this paper.
+
+ # 2 Related Work
+
+ **Pre-Transformer Methods** Pre-Transformer language model-related works in event coreference such as Kenyon-Dean et al. (2018) trained neural models with customized objective (loss) functions to generate richer representations of mention pairs using "static" embeddings such as contextual Word2Vec (Mikolov et al., 2013) as well as document-level features such as TF-IDF and heuristically-motivated features like mention recency, word overlap, and lemma overlap. As such, they improved upon the baselines established by Cybulska and Vossen (2015) on the $\mathrm{ECB + }$ corpus. Similarly, works such as Barhom et al. (2019) suggest both disjoint and joint clustering of event mentions with their related entity clusters by using a predicate-argument structure. In this, their disjoint model surpassed Kenyon-Dean et al. (2018) by 9.5 F1 points using the CoNLL scorer (Pradhan et al., 2014), whereas their joint model improved upon the disjoint model by 1.2 points for entities and 1 point for events.
+
+ **Transformer-based Cross-encoding** Most recent works (Meged et al., 2020; Zeng et al., 2020; Cattan et al., 2021; Allaway et al., 2021; Caciularu et al., 2021; Held et al., 2021; Yu et al., 2022a) in CDCR have shown success in using pairwise mention representation learning models, a method popularly known as cross-encoding. These methods use distributed and contextually-enriched "non-static" vector representations of mentions from large, Transformer-based language models like various BERT variants to calculate supervised pairwise scores for those event mentions. At inference, such works use variations of incremental or agglomerative clustering techniques to form predicted coreference links and evaluate their chains on gold coreference standards. The methods vary in the context they use for cross-encoding: Cattan et al. (2021) use only sentence-level context, Held et al. (2021) use context from sentences surrounding the mentions, and Caciularu et al. (2021) use context from entire documents.
+
+ In our research, we have focused on the CDLM model from Caciularu et al. (2021) and their methodology, which uses a combination of enhanced pretraining using the global attention mechanism inspired by Beltagy et al. (2020) as well as finetuning on a task-specific dataset using pretrained special tokens to generate more semantically-enhanced embeddings for mentions.
+
+ Beltagy et al. (2020) and Caciularu et al. (2021) cleverly use the global attention mechanism to linearly scale the oft-quadratic complexity of pairwise scoring of mentions in coreference resolution while also accommodating longer documents (up to 4,096 tokens). Previous works such as Baldwin (1997), Stoyanov and Eisner (2012), Lee et al. (2012), and Lee et al. (2013) also reduce computation time by strategically using deterministic, rule-based systems along with neural architectures.
+
+ Recently, pruning $\mathrm{P_{TN}^{-}}$ for ECR has been shown to be effective by Held et al. (2021). They create individual representations for mentions and use them in a bi-encoder method to retrieve potential coreferent candidates, which are later refined using a cross-encoder trained on hard negative examples. In contrast, our approach utilizes a computationally efficient pruning heuristic and trains the cross-encoder on a smaller dataset. We also conduct an error analysis on all hard examples that are misclassified by the cross-encoder, which is made feasible by the heuristic.
+
+ # 3 Datasets
+
+ We experiment with two popular ECR datasets distinguished by the effectiveness of a lemma heuristic on the dataset.
+
+ # 3.1 Event Coreference Bank Plus (ECB+)
+
+ The $\mathrm{ECB + }$ corpus (Cybulska and Vossen, 2014) is a popular English corpus used to train and evaluate systems for event coreference resolution. It extends the Event Coref Bank corpus (ECB; Bejan and Harabagiu, 2010) with annotations from around 500 additional documents. The corpus includes annotations of text spans that represent events, as well as information about how those events are related through coreference. We divide the documents from topics 1 to 35 into the training and validation sets$^2$, and those from 36 to 45 into the test set, following the approach of Cybulska and Vossen (2015).
+
+ # 3.2 Gun Violence Corpus (GVC)
+
+ The Gun Violence Corpus (Vossen et al., 2018) is a recent English corpus exclusively focusing on event coreference resolution. It is intended to be a more challenging dataset than $\mathrm{ECB + }$, which has a very strong lemma baseline (Cybulska and Vossen, 2014). It is a collection of texts surrounding a single topic (gun violence) and various sub-topics. Since it does not have coreference links across sub-topics, we only consider mention pairs within the sub-topics. We use the data split by Bugert et al. (2021). Table 1 contains the statistics for the $\mathrm{ECB + }$ and GVC corpora.
+
+ <table><tr><td rowspan="2"></td><td colspan="3">ECB+</td><td colspan="3">GVC</td></tr><tr><td>Train</td><td>Dev</td><td>Test</td><td>Train</td><td>Dev</td><td>Test</td></tr><tr><td>T/ST</td><td>25</td><td>8</td><td>10/20</td><td>1/170</td><td>1/37</td><td>1/34</td></tr><tr><td>D</td><td>594</td><td>196</td><td>206</td><td>358</td><td>78</td><td>74</td></tr><tr><td>M</td><td>3808</td><td>1245</td><td>1780</td><td>5313</td><td>977</td><td>1008</td></tr><tr><td>C</td><td>1464</td><td>409</td><td>805</td><td>991</td><td>228</td><td>194</td></tr><tr><td>S</td><td>1053</td><td>280</td><td>623</td><td>252</td><td>70</td><td>43</td></tr></table>
+
+ Table 1: ECB+ and GVC Corpus statistics for event mentions. T/ST = topics/sub-topics, D = documents, M = event mentions, C = clusters, S = singletons.
+
+ # 4 System Overview
+
+ There are two major components in our system: the heuristic and the discriminator (cross-encoder) trained on the output of the heuristic.
+
+ # 4.1 Lemma Heuristics (LH, LHOra)
+
+ A key feature of ECR is its high baseline achieved by comparing the lemmas of mention triggers and sentences. To leverage this feature, we incorporate it as the first step in our coreference resolution system. We use $\mathsf{spaCy}^3$, a widely-used tool for this task, to extract the lemmas. In addition to matching lemmas of triggers, we also create and utilize a set of synonymous<sup>4</sup> lemma pairs that commonly appear in coreferent mention pairs in our training set. This approach allows us to identify coreferent mention pairs that have different triggers and improve the overall recall. The heuristic, LH, only utilizes the synonymous lemma pairs from the training set. We also evaluate the performance of $\mathsf{LH}_{\mathsf{Ora}}$, which uses synonymous lemma pairs from the entire dataset, which means it uses the coreference information of the development and test sets to create synonymous lemma pairs.
+
+ For a mention pair (A, B), with triggers $(t_A, t_B)$, head lemmas $(l_A, l_B)$, and a given synonymous lemma pair set $(\mathrm{Syn}_{\mathbb{P}})$, we consider only lemma pairs that pass any of the following rules:
+
+ - $(l_A, l_B) \in \mathrm{Syn}_{\mathbb{P}}$
+ - $l_A = l_B$
+ - $t_B$ contains $l_A$
+ - $t_A$ contains $l_B$
+
+ $^3$ https://spacy.io/ model en_core_web_md v3.4
+
+ $^4$ The words need not be synonyms in strict definitions, but rather appear in coreference chains.
+
+ ![](images/9878c5dd41e62deddda9a6f71cf8b3aa7ee051d28beed08fb94183eedcd6ee49.jpg)
+ Figure 2: Coreferent vs. non-coreferent mention pairs ratio across datasets.
+
+ For mentions that have matching trigger lemmas/triggers or are synonymous, we proceed by comparing the context of the mentions. In this work, we only compare each mention's sentence to check for similarities between two mentions. To further refine our comparison, we remove stop words and convert the tokens in the text to their base form. Then, we determine the overlap between the two mentions and predict that the pair is coreferent if the overlap exceeds a certain threshold. We tune the threshold using the development sets.
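The rule check plus sentence-overlap test can be sketched as below. This is a toy illustration, not the paper's implementation: `STOP_WORDS` and `SYN_PAIRS` are illustrative stand-ins (the real system derives lemmas and stop words from spaCy and the synonymous pairs from the training data), sentences are assumed pre-lemmatized, and Jaccard similarity stands in for the unspecified overlap measure.

```python
# Illustrative stand-ins; the actual system builds these from spaCy
# and from coreferent pairs in the training set.
STOP_WORDS = {"a", "an", "the", "in", "on", "of", "was", "is"}
SYN_PAIRS = {("die", "death"), ("death", "die")}

def lemma_match(l_a, l_b, t_a, t_b):
    """The four LH rules: synonymous pair, equal lemmas, or trigger containment."""
    return ((l_a, l_b) in SYN_PAIRS or l_a == l_b
            or l_a in t_b or l_b in t_a)

def sentence_overlap(sent_a, sent_b):
    """Bag-of-lemmas overlap after stop-word removal (Jaccard, as an example)."""
    a = set(sent_a) - STOP_WORDS
    b = set(sent_b) - STOP_WORDS
    return len(a & b) / max(1, len(a | b))

def lh_predict(pair, threshold=0.15):
    """LH: coreferent iff a lemma rule fires and sentence overlap clears the threshold."""
    (l_a, t_a, sent_a), (l_b, t_b, sent_b) = pair
    return lemma_match(l_a, l_b, t_a, t_b) and \
           sentence_overlap(sent_a, sent_b) >= threshold
```

The threshold here is a free parameter; as in the paper, it would be tuned on the development sets.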
+
+ # 4.1.1 Filtering out $\mathbf{P}_{\mathrm{TN}}^{-}$
+
+ Cross-document coreference systems often struggle with a skewed distribution of mention pairs, as seen in Figure 2. In any dataset, only $5 - 10\%$ of the pairs are coreferent, while the remaining $90 - 95\%$ are non-coreferent. To address this, we use the heuristic to balance the distribution by selectively removing non-coreferent pairs $\left(\mathrm{P}_{\mathrm{TN}}^{-}\right)$, while minimizing the loss of coreferent pairs $\left(\mathrm{P}_{\mathrm{FN}}^{+}\right)$. We do this by only considering the mention pairs that the heuristic predicts as coreferent, and discarding the non-coreferent ones.
+
+ # 4.1.2 $\mathbf{P}_{\mathrm{hard}}^{-}$, $\mathbf{P}_{\mathrm{easy}}^{+}$ and $\mathbf{P}_{\mathrm{FN}}^{+}$ Analysis
+
+ $\mathbf{P}_{\mathrm{easy}}^{+}$ and $\mathbf{P}_{\mathrm{hard}}^{-}$: As defined earlier, $\mathbf{P}_{\mathrm{easy}}^{+}$ are the mention pairs that the heuristic correctly predicts as coreferent when compared to the ground truth, and $\mathbf{P}_{\mathrm{hard}}^{-}$ are the heuristic's predictions of coreference that are incorrect when compared to the ground truth. In §4.2.1, we describe how we fix the heuristic's $\mathbf{P}_{\mathrm{hard}}^{-}$ predictions while minimizing the errors introduced in terms of $\mathbf{P}_{\mathrm{easy}}^{+}$.
+
+ $\mathbf{P}_{\mathrm{FN}}^{+}$: We define a pair as a $\mathrm{P}_{\mathrm{FN}}^{+}$ only if it cannot be linked to the true cluster through subsequent steps.
+
+ ![](images/044e65281cd9e35d29dd2b480b601b1d42b22fbc62d2ba4b4c16dd883c34761a.jpg)
+ Figure 3: Counting mention pairs ($\mathrm{P_{FN}^{+}}$ and $\mathrm{P}_{\mathrm{easy}}^{+}$) in a true cluster $\{\mathbf{a},\mathbf{b},\mathbf{c}\}$ using the heuristic's coreferent predictions (solid line) and non-coreferent predictions (dotted line). We count $\mathrm{P_{FN}^{+}}$ after performing transitive closure, resulting in a size of 0 (instead of 1) in (2).
+
+ As shown in Figure 3, if a true cluster is $\{a, b, c\}$ and the heuristic discards one pair $(a, c)$, it will not be considered as a $\mathrm{P}_{\mathrm{FN}}^{+}$ because the coreference can be inferred through transitivity. However, if it discards two pairs $\{(a, c), (b, c)\}$, they will both be considered as $\mathrm{P}_{\mathrm{FN}}^{+}$. We hypothesize that an ideal heuristic is one that maintains a balance between $\mathrm{P}_{\mathrm{easy}}^{+}$ and $\mathrm{P}_{\mathrm{hard}}^{-}$ while minimizing $\mathrm{P}_{\mathrm{FN}}^{+}$, and therefore, we tune the heuristic's threshold accordingly using the development sets of the corpora.
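Counting $\mathrm{P}_{\mathrm{FN}}^{+}$ after transitive closure amounts to a connectivity check over the pairs the heuristic kept; a sketch using a small union-find (function and variable names are ours):

```python
from itertools import combinations

def count_fn_pairs(cluster, kept_pairs):
    """Count P_FN+ for one gold cluster: coreferent pairs that remain
    unlinked even after transitive closure over the heuristic's kept pairs."""
    parent = {m: m for m in cluster}

    def find(x):
        # find the root of x with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in kept_pairs:  # union every pair the heuristic kept
        parent[find(a)] = find(b)

    # pairs in different components cannot be recovered by transitivity
    return sum(1 for a, b in combinations(sorted(cluster), 2)
               if find(a) != find(b))
```

On the Figure 3 example, keeping $(a,b)$ and $(b,c)$ while discarding $(a,c)$ yields zero false negatives, whereas keeping only $(a,b)$ yields two.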
110
+
111
+ We evaluate the heuristics LH and $\mathrm{LH}_{\mathrm{Ora}}$ by plotting the distributions of $\mathrm{P}_{\mathrm{easy}}^{+}$ , $\mathrm{P}_{\mathrm{hard}}^{-}$ , and $\mathrm{P}_{\mathrm{FN}}^{+}$ generated by each for the two corpora. From Figure 4, we observe similar distributions for the test and development sets with the threshold value chosen from the development set. We also observe that LH causes a significant number of $\mathrm{P}_{\mathrm{FN}}^{+}$ , while $\mathrm{LH}_{\mathrm{Ora}}$ has a minimal number of $\mathrm{P}_{\mathrm{FN}}^{+}$ . Minimizing the count of $\mathrm{P}_{\mathrm{FN}}^{+}$ is important as it directly affects
112
+
113
+ ![](images/c5e006ebd082aa3d824fe189830e8d7ca2cf22c9bae0159c388c7bcba48a0457.jpg)
114
+ Figure 4: LH and $\mathsf{LH}_{\mathsf{Ora}}$ Distributions of $\mathrm{P}_{\mathrm{hard}}^{-}$ , $\mathrm{P}_{\mathrm{easy}}^{+}$ and $\mathrm{P_{FN}^{+}}$ for ECB+ and GVC corpora. $\mathsf{LH}_{\mathsf{Ora}}$ ensures no (or negligible) loss in $\mathbf{P}_{\mathbf{FN}}^{+}$ .
115
+
116
+ ![](images/7ebf31c5a73012f5696cba402b3718344b806a02f81d5562ce4be313a8402fb4.jpg)
117
+ Figure 5: The cross-encoding technique to generate the coreference score between the mention pair (A, B). This involves adding special tokens, $\langle m\rangle$ and $\langle/m\rangle$ , around the event triggers, and then combining and processing the two mentions through a transformer-based language model. Certain outputs of the transformer ($\mathrm{E_{CLS}}$ , $\mathrm{E_A}$ , $\mathrm{E_B}$ ) are then concatenated and fed into a classifier, which produces a score between 0 and 1 indicating the degree of coreference between the two mentions.
118
+
119
+ the system's recall. The distributions of $\mathrm{P}_{\mathrm{easy}}^{+}$ and $\mathrm{P}_{\mathrm{hard}}^{-}$ remain balanced across all datasets except when $\mathrm{LH}_{\mathrm{Ora}}$ is used in GVC where there are double the number of $\mathrm{P}_{\mathrm{hard}}^{-}$ to $\mathrm{P}_{\mathrm{easy}}^{+}$ . $\mathrm{P}_{\mathrm{hard}}^{-}$ should be minimized as it can affect the system's overall precision.
120
+
121
+ # 4.2 Cross-Encoder
122
+
123
+ A common technique to perform ECR is to use Transformer-based cross-encoding (CE) on the mention pair (A, B). This process, depicted in Figure 5, begins by surrounding each trigger with special tokens $(<m>$ and $</m>)$ . The mentions are then combined into a single input for the transformer (e.g., RoBERTa). The pooled output of the transformer $(\mathrm{E_{CLS}})$ and the outputs corresponding to the tokens of the event triggers ($\mathrm{E_{A}}$ and $\mathrm{E_{B}}$ ) are extracted. $\mathrm{E_{CLS}}$ , $\mathrm{E_A}$ , $\mathrm{E_B}$ , and the element-wise product of the mention embeddings $(\mathrm{E}_{\mathrm{A}}\odot \mathrm{E}_{\mathrm{B}})$ are all concatenated to create a unified representation of the mention pair. This representation is fed into a classifier to learn the coreference score, CE(A, B), between the pair while fine-tuning the transformer.
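A rough sketch of this pairing step only, using toy arrays in place of transformer outputs (the `pair_representation` helper is illustrative; in the actual method the embeddings come from a fine-tuned transformer):

```python
import numpy as np


def pair_representation(e_cls, e_a, e_b):
    """Unified mention-pair representation: [E_CLS ; E_A ; E_B ; E_A ⊙ E_B],
    where ⊙ is the element-wise product, fed to the pairwise classifier."""
    return np.concatenate([e_cls, e_a, e_b, e_a * e_b])


rng = np.random.default_rng(0)
d = 768  # hidden size of e.g. RoBERTa-base
e_cls, e_a, e_b = rng.normal(size=(3, d))
rep = pair_representation(e_cls, e_a, e_b)
assert rep.shape == (4 * d,)  # four d-dimensional pieces, concatenated
```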
124
+
125
+ # 4.2.1 $\mathbf{P}_{\mathrm{easy}}^{+}$ & $\mathbf{P}_{\mathrm{hard}}^{-}$ Discriminator (D)
126
+
127
+ The cross-encoder's encoding is non-symmetric, meaning that, depending on the order in which the mentions are concatenated, it will give different coreference scores. In reality, the order should not matter for predicting whether the two events are the same. We propose a symmetric cross-encoding scorer that takes the average of the scores predicted from both orders of concatenation. So for a mention pair, $p = (\mathrm{A},\mathrm{B})$ , the symmetric cross-encoder coreference scorer (D) is given as:
128
+
129
+ $$
130
+ \mathrm {D} (p) = \frac {\mathrm {C E} (\mathrm {A} , \mathrm {B}) + \mathrm {C E} (\mathrm {B} , \mathrm {A})}{2} \tag {1}
131
+ $$
132
+
133
+ We employ a cross-encoder with the symmetric scorer outlined in Equation 1 as the discriminator for $\mathrm{P}_{\mathrm{easy}}^{+}$ and $\mathrm{P}_{\mathrm{hard}}^{-}$ . We conduct experiments utilizing two different Transformer models, RoBERTa ($\mathrm{D_{small}}$) and Longformer ($\mathrm{D_{long}}$), which vary in their maximum input capacity.
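A minimal illustration of the symmetric scorer in Equation 1, with a stand-in scorer in place of the trained cross-encoder (`toy_ce` is purely illustrative):

```python
def toy_ce(a, b):
    """Stand-in for the fine-tuned cross-encoder CE: order-dependent,
    like concatenating (A, B) vs. (B, A) before encoding."""
    return (hash((a, b)) % 1000) / 1000.0


def symmetric_score(ce, a, b):
    """Equation 1: D(p) = (CE(A, B) + CE(B, A)) / 2 — order-invariant."""
    return 0.5 * (ce(a, b) + ce(b, a))


# The averaged score is identical regardless of concatenation order.
assert symmetric_score(toy_ce, "m1", "m2") == symmetric_score(toy_ce, "m2", "m1")
```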
134
+
135
+ # 5 Experimental Setup
136
+
137
+ We describe our process of training, prediction, and hyperparameter choice in this section.
138
+
139
+ # 5.1 Mention Pair Generation
140
+
141
+ We use the gold mentions from the datasets. Following previous methods, we generate all the pairs $(\mathrm{P_{all}})$ of mentions $(M^v)$ from documents coming from the same topic. We use gold topics in the training phase and predicted topics through document clustering in the prediction phase (Bugert et al., 2021).
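Within-topic pair generation can be sketched as follows, assuming a simple mention-to-topic mapping (the `topic_mention_pairs` name and interface are hypothetical):

```python
from collections import defaultdict
from itertools import combinations


def topic_mention_pairs(mentions, topic_of):
    """P_all: every unordered mention pair whose documents share a topic."""
    by_topic = defaultdict(list)
    for m in mentions:
        by_topic[topic_of[m]].append(m)
    return [p for ms in by_topic.values() for p in combinations(ms, 2)]


topic_of = {"m1": "t1", "m2": "t1", "m3": "t2", "m4": "t2", "m5": "t1"}
pairs = topic_mention_pairs(sorted(topic_of), topic_of)
assert ("m1", "m3") not in pairs  # cross-topic pairs are never generated
assert len(pairs) == 4            # 3 pairs within t1 + 1 pair within t2
```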
142
+
143
+ # 5.2 Training Phase
144
+
145
+ During the training phase, we leverage LH to generate a balanced set of positive and negative samples, $\mathrm{P}_{\mathrm{easy}}^{+}$ and $\mathrm{P}_{\mathrm{hard}}^{-}$ , respectively. These samples are then used to train our models, $\mathrm{D}_{\mathrm{small}}$ and $\mathrm{D}_{\mathrm{long}}$ , separately, using the binary cross-entropy (BCE) loss function as follows:
146
+
147
+ $$
148
+ L = -\sum_{\substack{p_{+}\in \mathrm{P}_{\text{easy}}^{+},\\ p_{-}\in \mathrm{P}_{\text{hard}}^{-}}}\left[\log \mathrm{D}(p_{+}) + \log \left(1 - \mathrm{D}(p_{-})\right)\right]
149
+ $$
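A numeric sketch of this objective over heuristic-selected positives and negatives; the real training backpropagates through the transformer, which this toy version omits:

```python
import numpy as np


def bce_loss(d_pos, d_neg):
    """Negative log-likelihood of the heuristic-selected samples:
    d_pos = D(p) for p in P+_easy, d_neg = D(p) for p in P-_hard."""
    d_pos = np.clip(d_pos, 1e-7, 1 - 1e-7)  # numerical safety
    d_neg = np.clip(d_neg, 1e-7, 1 - 1e-7)
    return -(np.log(d_pos).sum() + np.log(1.0 - d_neg).sum())


# Confident, correct scores yield a smaller loss than confidently wrong ones.
assert bce_loss(np.array([0.99]), np.array([0.01])) < \
       bce_loss(np.array([0.01]), np.array([0.99]))
```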
150
+
151
+ Unlike traditional methods, we do not rely on random sampling or artificial balancing of the dataset. Instead, our heuristic ensures that the positive and negative samples are naturally balanced (as depicted in Figure 6). A side-effect of adopting this approach is that some of the positive samples are
152
+
153
+ # Algorithm 1 Training Phase
154
+
155
+ Require: $D$ : training document set
156
+
157
+ $T$ : gold topics
158
+
159
+ $M^v$ : gold event mentions in $D$
160
+
161
+ $S^v$ : sentences of the mentions
162
+
163
+ $D^v$ : documents of the mentions
164
+
165
+ $G$ : gold mention cluster map
166
+
167
+ $P\gets$ TopicMentionPairs $(M^v,T)$
168
+
169
+ $\mathrm{Syn}_{\mathrm{P}} \gets \text{SynonymousLemmaPairs}(P, G)$
170
+
171
+ $\mathbf{P}_{\mathrm{easy}}^{+}, \mathbf{P}_{\mathrm{hard}}^{-}, \mathbf{P}_{\mathrm{FN}}^{+}, \mathbf{P}_{\mathrm{TN}}^{-} \gets \mathrm{LH}(P, G, \mathrm{Syn}_{\mathbf{P}}, S^{v})$
172
+
173
+ $\mathsf{D}_{\mathrm{long}} \gets \mathsf{TrainCrossEncoder}(\mathsf{P}_{\mathrm{easy}}^{+}, \mathsf{P}_{\mathrm{hard}}^{-}, D^{v})$
174
+
175
+ $\mathsf{D}_{\mathrm{small}} \gets \mathsf{TrainCrossEncoder}(\mathbf{P}_{\mathrm{easy}}^{+}, \mathbf{P}_{\mathrm{hard}}^{-}, S^{v})$
176
+
177
+ return $\mathrm{Syn}_{\mathrm{P}}, \mathrm{D}_{\mathrm{long}}, \mathrm{D}_{\mathrm{small}}$
178
+
179
+ excluded from training. We do this to keep the training and prediction phases consistent and to ensure that the cross-encoder is not confused by the inclusion of these hard positive examples.
180
+
181
+ Additionally, for D with Longformer, we utilize the entire document for training, while for D with RoBERTa, we only use the sentence containing the mention to provide contextual information. We employ the Adam optimizer with a learning rate of 0.0001 for the classifier and 0.00001 for fine-tuning the Transformer model. This entire process is illustrated in Algorithm 1.
182
+
183
+ To ensure optimal performance, we train our system separately for both the $\mathrm{ECB + }$ and GVC training sets. We utilize a single NVIDIA A100 GPU
184
+
185
+ ![](images/7231456e8ebf517a9b3307d207136f8703102dabbb7f73dc12c72bd6892eaf5d.jpg)
186
+
187
+ ![](images/85879c8c0f3e39c710bebb23033d8cf7da70b7f7eb7015ec55978e2632877ba4.jpg)
188
+
189
+ ![](images/68725d14490616c1977c09cb93d6714468170e34cc41c9c6329510c78b04a646.jpg)
190
+
191
+ ![](images/706be8391a3493a91c96af1d018d8797ba173f7eea810c16bc07241e7d9bc55e.jpg)
192
+
193
+ ![](images/f33dbd0338c941eef83b7523c27eaa228c93725844f657810f9567caa8c54909.jpg)
194
+ positive
195
+
196
+ ![](images/d18353d9aa4ef3619b7ca2c64bc704e0354ecabccb0f349a1a6fbf8d1c506ff7.jpg)
197
+ negative
198
+ Figure 6: Training Samples of previous methods vs. ours. The heuristic creates a balanced and significantly smaller training set for $\mathrm{ECB + }$ . For GVC, the heuristic discards half of the negative samples while somewhat balancing the dataset.
199
+
200
+ # Algorithm 2 Prediction Phase
201
+
202
+ Require: $D$ : testing document set
203
+
204
+ $T$ : gold/clustered topics
205
+
206
+ $M^v$ : gold event mentions in $D$
207
+
208
+ $S^v$ : sentences of the mentions
209
+
210
+ $\mathrm{Syn_p}$ : synonymous lemma pairs from training
211
+
212
+ $\mathsf{D}_{\mathrm{small}}, \mathsf{D}_{\mathrm{long}}$ : trained CE discriminators
213
+
214
+ $P\gets$ TopicMentionPairs $(M^v,T)$
215
+
216
+ $\mathrm{A_H,P^+}\leftarrow \mathrm{LH}(P,\mathrm{Syn_P},S^v)$
217
+
218
+ $\mathrm{A_P}\leftarrow \mathsf{D}_{\mathrm{small}}(\mathbf{P}^{+}) > 0.5$
219
+
220
+ $\mathrm{A_P}\leftarrow \mathsf{D_{long}}(\mathbf{P}^{+}) > 0.5$
221
+
222
+ return ConnectedComponents $(\mathrm{A_H})$
223
+
224
+ ConnectedComponents $(\mathrm{A_P})$
225
+
226
+ with 80GB memory to train $\mathsf{D}_{\mathrm{long}}$ with the Longformer model, and a single NVIDIA RTX 3090 GPU (24 GB) for training $\mathsf{D}_{\mathrm{small}}$ with the RoBERTa-BASE model. We train each system for 10 epochs, with each epoch taking approximately one hour for the Longformer model and 15 minutes for the RoBERTa model.
227
+
228
+ # 5.3 Prediction Phase
229
+
230
+ In the prediction phase, we first pass the mention pairs through the heuristic and create an adjacency matrix called $\mathrm{A_H}$ based on its coreferent predictions. The ones predicted not coreferent by the heuristic are discarded. This step is crucial in terms of making the task tractable. Next, we pass the mention pairs that are predicted to be coreferent by the heuristic through $\mathrm{D_{small}}$ and $\mathrm{D_{long}}$ separately. Using the subsequent coreferent predictions from these models, we generate another adjacency matrix $\mathrm{A_P}$ . To create event clusters, we use these matrices to identify connected components.
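The final clustering step reduces to connected components over the predicted coreference links; a dependency-free sketch (mention ids are illustrative):

```python
from collections import defaultdict


def clusters_from_pairs(mentions, coreferent_pairs):
    """Form event clusters as the connected components of the graph whose
    edges are the predicted coreferent links (the ConnectedComponents step)."""
    adj = defaultdict(set)
    for a, b in coreferent_pairs:
        adj[a].add(b)
        adj[b].add(a)
    seen, clusters = set(), []
    for m in mentions:
        if m in seen:
            continue
        stack, comp = [m], set()
        while stack:  # iterative DFS over the coreference graph
            x = stack.pop()
            if x in comp:
                continue
            comp.add(x)
            stack.extend(adj[x] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters


mentions = ["a", "b", "c", "d", "e"]
links = [("a", "b"), ("b", "c")]  # predicted coreferent pairs
assert sorted(map(sorted, clusters_from_pairs(mentions, links))) == \
       [["a", "b", "c"], ["d"], ["e"]]  # d and e remain singletons
```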
231
+
232
+ As a baseline, we use the matrix $\mathrm{A_H}$ to generate the clusters. We then use $\mathrm{A_P}$ to assess the improvements made by using $\mathrm{D_{small}}$ and $\mathrm{D_{long}}$ over the baseline. This process is illustrated in Algorithm 2. The process takes between 6-10 minutes to run the Longformer model and between 1-2 minutes to run the RoBERTa one.
233
+
234
+ # 6 Results
235
+
236
+ We evaluate the event clusters formed using the standard coreference evaluation metrics: MUC, $B^{3}$ , $CEAF_{e}$ , LEA, and CoNLL F1, the average of MUC, $B^{3}$ , and $CEAF_{e}$ (Vilain et al., 1995; Bagga and Baldwin, 1998; Luo, 2005; Luo et al., 2014; Pradhan et al., 2014; Moosavi et al., 2019). We
237
+
238
+ <table><tr><td rowspan="2">Methods</td><td colspan="2">CoNLL F1</td></tr><tr><td>ECB+</td><td>GVC</td></tr><tr><td>Bugert et al. (2021)</td><td>-</td><td>59.4</td></tr><tr><td>Cattan et al. (2021)</td><td>81.0</td><td>-</td></tr><tr><td>Caciularu et al. (2021)</td><td>85.6</td><td>-</td></tr><tr><td>Held et al. (2021)</td><td>85.7</td><td>83.7</td></tr><tr><td>LH</td><td>76.4</td><td>51.8</td></tr><tr><td>LH + Dsmall</td><td>80.3</td><td>73.7</td></tr><tr><td>LH + Dlong</td><td>81.7</td><td>75.0</td></tr><tr><td>LHOra</td><td>81.9</td><td>53.4</td></tr><tr><td>LHOra + Dsmall</td><td>85.9</td><td>75.4</td></tr><tr><td>LHOra + Dlong</td><td>87.4</td><td>76.1</td></tr></table>
239
+
240
+ Table 2: Results on within and cross-document event coreference resolution on $\mathrm{ECB + }$ and GVC test sets.
241
+
242
+ run the baseline results (LH and $\mathrm{LH_{Ora}}$ ) and the combination of each heuristic with the two discriminators $(\mathrm{LH} / \mathrm{LH_{Ora}} + \mathrm{D_{small}} / \mathrm{D_{long}})$ . We compare to previous methods for $\mathrm{ECB+}$ and GVC as shown in Table 2. Bold indicates current or previous SOTA and our best model.
243
+
244
+ CoNLL F1 scores show that LH and $\mathrm{LH_{Ora}}$ are strong baselines for the $\mathrm{ECB + }$ corpus, where $\mathrm{LH_{Ora}}$ surpasses some of the previous best methods. From this, we can say that making improvements in the heuristic by better methods of finding synonymous lemma pairs is a viable solution for tackling $\mathrm{ECB + }$ with a heuristic. However, the heuristics fall short for GVC, where $\mathrm{LH_{Ora}}$ is only marginally better than LH. This may be due to the lower variation in lemmas in the GVC corpus. We hypothesize methods that can automatically detect synonymous lemma pairs will not be beneficial for GVC, and LH itself is sufficient as a heuristic here.
245
+
246
+ The discriminators consistently make significant improvements over the heuristics across both datasets. For $\mathrm{ECB+}$ , $\mathsf{D}_{\mathrm{long}}$ is nearly 2 points better than $\mathsf{D}_{\mathrm{small}}$ in terms of the CoNLL measure. Both $\mathsf{D}_{\mathrm{small}}$ and $\mathsf{D}_{\mathrm{long}}$ , when coupled with $\mathsf{LH}_{\mathrm{Ora}}$ , surpass the state of the art for this dataset. $\mathsf{LH} + \mathsf{D}_{\mathrm{long}}$ beats Cattan et al. (2021) but falls short of SOTA, albeit by only 4 points. On GVC, both fall short of SOTA (Held et al., 2021) by only 8-9 points on CoNLL F1, with substantially fewer computations. In terms of computational cost-to-performance ratio, as we elaborate in §7.1, our methods outperform all previous methods.
247
+
248
+ For ECR, where context is key, we would expect better performance from encoders with longer context. $\mathsf{D}_{\mathrm{long}}$ and $\mathsf{D}_{\mathrm{small}}$ show this trend for both
249
+
250
+ ![](images/5fdf4f5b8c80380ee5a0d808c46d90892a38e95187a9fefa7a6e2b9f87e4d9e5.jpg)
251
+ Figure 7: Prediction Phase Time Complexity in terms of Mention Pair Encoding.
252
+
253
+ ECB+ and GVC datasets. However, the gain we get from using the entire document is not substantial given the amount of additional computation required. An interesting line of future work would be to automatically detect the core sections in the document that contribute to coreference and then use only those as context for ECR.
254
+
255
+ # 7 Discussion
256
+
257
+ # 7.1 Time Complexity Analysis
258
+
259
+ The heuristic is a very fast process that scales linearly with the number of mentions in a corpus. Specifically, by hashing the lemma pairs and sentence token lemmas, this step performs a linear number of mention-pair comparisons at prediction. Mention-pair cross-encoding with a Transformer, by contrast, is computationally intensive, and a method that encodes all mention pairs in a large corpus can become intractable. Our method, however, is linear in complexity with the number of mentions, as shown in Figure 7, and outperforms previous methods in terms of computational efficiency. While Held et al. (2021)'s cross-encoding at prediction is linear $(5n)$ , their pruning step is quadratic. They additionally rely on training a bi-encoder and a mention neighborhood detection step that requires GPUs.
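A simplified sketch of the constant-time pair check behind this linear behavior, covering only the lemma-match and synonymous-pair conditions (the full LH also uses sentence-token overlap; the interface here is our simplification):

```python
def lh_coreferent(m1, m2, syn_pairs):
    """Simplified LH check: predict coreferent if the head lemmas match,
    or if the unordered lemma pair is in the synonymous-pair set.
    Both checks are O(1) thanks to hashing, so scoring all kept pairs
    stays cheap."""
    l1, l2 = m1["lemma"], m2["lemma"]
    return l1 == l2 or frozenset((l1, l2)) in syn_pairs


syn = {frozenset(("shoot", "kill"))}  # hypothetical synonymous lemma pair
assert lh_coreferent({"lemma": "shoot"}, {"lemma": "shoot"}, syn)
assert lh_coreferent({"lemma": "shoot"}, {"lemma": "kill"}, syn)
assert not lh_coreferent({"lemma": "shoot"}, {"lemma": "rob"}, syn)
```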
260
+
261
+ # 7.2 Synonymous Lemma Pairs
262
+
263
+ We have established an upper limit for ECR using the $\mathrm{LH}_{\mathrm{Ora}} + \mathrm{D}_{\mathrm{long}}$ method for $\mathrm{ECB+}$ . Previous methods, such as Held et al. (2021), use an oracle coreference scorer after their pruning step. In other words, their oracle assumption involves using a perfect cross-encoder. In contrast, we only use the oracle for pruning by assuming a perfect set of synonymous lemma pairs. This means that
264
+
265
+ improved pruning methods can lead to better ECR performance. We believe that it is possible to create a more effective synonymous pair detector than $\mathrm{LH}_{\mathrm{Ora}}$ by adopting recent work on predicate class detection (Brown et al., 2014, 2022) that use VerbNet (Schuler, 2005). In future research, we aim to enhance the process of generating synonymous pairs through the use of cross-encoding or additional steps such as word sense disambiguation with the Proposition Bank (Palmer et al., 2005; Pradhan et al., 2022). Identifying the sense of the trigger will help refine the lemma pairs that appear in coreference chains. Additionally, annotating the sense of the trigger is a straightforward process that can be easily incorporated into annotation procedures for new datasets, which is more efficient than coreference annotations.
266
+
267
+ # 7.3 Qualitative Error Analysis
268
+
269
+ We carry out a comprehensive analysis of the errors the discriminator makes after the heuristic's predictions. Unlike previous methods (Barhom et al., 2019) that sample a subset of mentions for error analysis, we do so for the entire dataset. By efficiently discarding the large number of $\mathrm{P}_{\mathrm{TN}}^{-}$ , we are able to isolate the shortcomings of the cross-encoder, analyze them, and offer solutions. Table 6 in Appendix C lists the various kinds of errors (incorrect and missing links) made by $\mathrm{D}_{\mathrm{small}}$ on the $\mathrm{ECB}+$ and GVC dev sets.
270
+
271
+ We find error categories like same-sentence pronouns, weak temporal reasoning, ambiguity due to coreferring entities, misleading lexical similarity, and missed set-member coreference links. Table 6 in the appendix presents examples of each.
272
+
273
+ Incorrect links due to same-sentence pronouns like "it" and "this" can be avoided by refining the heuristic-based mention-pair generation process to exclude same-sentence pronouns. Similarly, ambiguous temporal expressions like "Saturday" and "New Year's Day" that refer to the day of occurrence of the same event in articles published on different dates can be resolved by leveraging more temporal context/metadata where available. Errors on lexically different but semantically similar event mention lemmas can likewise be reduced by leveraging richer contextual representations.
274
+
275
+ By using the Oracle for pruning, we can focus on where $\mathsf{D}_{\mathrm{small}}$ falls short in terms of false positives. We first sort the final event clusters based on purity (number of non-coreferent links within the cluster compared to ground truth). Next, we identify
276
+
277
+ pairs that the discriminator incorrectly predicted to be coreferent within these clusters, focusing on the most impure ones, and analyze the corresponding mention sentences. Our findings are as follows:
278
+
279
+ - Problems caused when two big clusters are joined through very similar (almost adversarial) examples, e.g., "British hiker" vs. "New Zealand hiker." This error can be fixed by performing an additional level of clustering, such as K-means.
280
+ - Problems with set-member relations, such as "shootings" being grouped with specific "shooting" events. The sets often include many non-coreferent member events. To address this issue, we can identify whether an event is plural or singular prior to coreference resolution.
281
+ - Contrary to the notion that singleton mentions cause the most errors, we found that singletons appear in the least impure clusters. This means the cross-encoder discriminator is good at separating out singletons.
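The purity ranking used above can be computed as the fraction of within-cluster links that disagree with gold (a sketch; `gold_of` maps mentions to gold cluster ids and is our notation):

```python
from itertools import combinations


def impurity(predicted_cluster, gold_of):
    """Fraction of within-cluster links that are non-coreferent w.r.t. gold;
    0.0 means a pure cluster, higher means more incorrect links."""
    links = list(combinations(predicted_cluster, 2))
    if not links:
        return 0.0  # singletons have no internal links
    bad = sum(1 for a, b in links if gold_of[a] != gold_of[b])
    return bad / len(links)


gold_of = {"a": 1, "b": 1, "c": 2}
assert impurity(["a", "b"], gold_of) == 0.0       # fully pure
assert impurity(["a", "b", "c"], gold_of) == 2/3  # 2 of 3 links are wrong
```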
282
+
283
+ # 8 Conclusion & Future work
284
+
285
+ We showed that a simple heuristic paired with a cross-encoder performs ECR comparably to more complicated methods while being computationally efficient. We set an upper bound for performance on $\mathrm{ECB + }$ , suggesting that better detection of synonymous lemma pairs can yield further gains. Through extensive error analysis, we presented the shortcomings of the cross-encoder in this task and suggested ways to improve it.
286
+
287
+ Future research directions include applying our method to the more challenging task of cross-subtopic event coreference (e.g., FCC (Bugert et al., 2020)) where scalability and compute-efficiency are crucial metrics, making the current heuristic-based mention pair generation process "learnable" using an auxiliary cross-encoder, and incorporating word-sense disambiguation and lemma-pair annotations into the pipeline to resolve lexical ambiguity. An exciting direction for future work made tractable by our work is to incorporate additional cross-encoding features into the pipeline, especially using the latest advancements in visual transformers (Dosovitskiy et al., 2021; Bao et al., 2021; Liu et al., 2021; Radford et al., 2021). Another important direction is to test our method on languages with a richer morphology than English.
288
+
289
+ # Limitations
290
+
291
+ The most evident limitation of this research is that it has only been demonstrated on English coreference. Using a lemma-based heuristic requires a lemmatization algorithm in the preprocessing phase, and for more morphologically complex languages, especially low-resourced ones, lemmatization technology is less well-developed and may not be a usable part of our pipeline. Application to more morphologically rich languages is among our planned research directions.
292
+
293
+ In addition, all our experiments are performed on the gold-standard mentions from $\mathrm{ECB + }$ and GVC, meaning that coreference resolution is effectively independent of mention detection; we therefore have no evidence of how our method would fare in a pipeline where the two are coupled.
294
+
295
+ A further limitation is that training of the cross-encoders still requires intensive usage of GPU hardware (the GPU used for training Longformer is particularly high-end).
296
+
297
+ # Ethics Statement
298
+
299
+ We use publicly-available datasets, meaning any bias or offensive content in those datasets risks being reflected in our results. By its nature, the Gun Violence Corpus contains violent content that may be troubling for some.
300
+
301
+ We make extensive use of GPUs for training the discriminator models as part of our pipeline. While this has implications for resource consumption, and for access by those without similar hardware, the linear time complexity of our solution presents a way forward that relies less on GPU hardware overall than previous approaches, increasing the ability to perform event coreference resolution in low-compute settings.
302
+
303
+ # Acknowledgements
304
+
305
+ We would like to express our sincere gratitude to the anonymous reviewers whose insightful comments and constructive feedback helped to greatly improve the quality of this paper. We gratefully acknowledge the support of U.S. Defense Advanced Research Projects Agency (DARPA) FA8750-18-2-0016-AIDA - RAMFIS: Representations of vectors and Abstract Meanings for Information Synthesis. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of
306
+
307
+ DARPA or the U.S. government. Finally, we extend our thanks to the BoulderNLP group and the SIG-NAL Lab at Colorado State for their valuable input and collaboration throughout the development of this work.
308
+
309
+ # References
310
+
311
+ Emily Allaway, Shuai Wang, and Miguel Ballesteros. 2021. Sequential cross-document coreference resolution. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4659-4671, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
312
+ Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The First International Conference on Language Resources and Evaluation Workshop on Linguistic Coreference, pages 563-566.
313
+ Breck Baldwin. 1997. CogNIAC: high precision coreference with limited knowledge and linguistic resources. In *Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts*.
314
+ Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. 2021. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254.
315
+ Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and Ido Dagan. 2019. Revisiting joint modeling of cross-document entity and event coreference resolution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4179-4189, Florence, Italy. Association for Computational Linguistics.
316
+ Cosmin Bejan and Sanda Harabagiu. 2010. Unsupervised event coreference resolution with rich linguistic features. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1412-1422, Uppsala, Sweden. Association for Computational Linguistics.
317
+ Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
318
+ Susan Windisch Brown, Julia Bonn, Ghazaleh Kazeminejad, Annie Zaenen, James Pustejovsky, and Martha Palmer. 2022. Semantic representations for nlp using verbnet and the generative lexicon. Frontiers in artificial intelligence, 5.
319
+ Susan Windisch Brown, Dmitriy Dligach, and Martha Palmer. 2014. Verbnet class assignment as a wsd task. Computing Meaning: Volume 4, pages 203-216.
320
+ Michael Bugert, Nils Reimers, Shany Barhom, Ido Dagan, and Iryna Gurevych. 2020. Breaking the subtopic barrier in cross-document event coreference resolution. In Text2story@ecir, pages 23-29.
321
+
322
+ Michael Bugert, Nils Reimers, and Iryna Gurevych. 2021. Generalizing cross-document event coreference resolution across multiple corpora. Computational Linguistics, 47(3):575-614.
323
+ Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew Peters, Arie Cattan, and Ido Dagan. 2021. CDLM: Cross-document language modeling. In *Findings of the Association for Computational Linguistics: EMNLP* 2021, pages 2648-2662, Punta Cana, Dominican Republic. Association for Computational Linguistics.
324
+ Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, and Ido Dagan. 2021. Cross-document coreference resolution over predicted mentions. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 5100-5107, Online. Association for Computational Linguistics.
325
+ Agata Cybulska and Piek Vossen. 2014. Using a sledgehammer to crack a nut? lexical diversity and event coreference resolution. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4545-4552, Reykjavik, Iceland. European Language Resources Association (ELRA).
326
+ Agata Cybulska and Piek Vossen. 2015. Translating granularity of event slots into features for event coreference resolution. In Proceedings of the The 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 1-10, Denver, Colorado. Association for Computational Linguistics.
327
+ Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2021. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations.
328
+ William Held, Dan Iter, and Dan Jurafsky. 2021. Focus on what matters: Applying discourse coherence theory to cross document coreference. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1406-1417, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
329
+ Kian Kenyon-Dean, Jackie Chi Kit Cheung, and Doina Precup. 2018. Resolving event coreference with supervised representation learning and clustering-oriented regularization. arXiv preprint arXiv:1805.10985.
330
+ Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational linguistics, 39(4):885-916.
331
+ Heeyoung Lee, Marta Recasens, Angel Chang, Mihai Surdeanu, and Dan Jurafsky. 2012. Joint entity and
332
+
333
+ event coreference resolution across documents. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 489-500, Jeju Island, Korea. Association for Computational Linguistics.
334
+ Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. 2021. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision, pages 10012-10022.
335
+ Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, HLT '05, page 25-32, USA. Association for Computational Linguistics.
336
+ Xiaoqiang Luo, Sameer Pradhan, Marta Recasens, and Eduard Hovy. 2014. An extension of BLANC to system mentions. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 24-29, Baltimore, Maryland. Association for Computational Linguistics.
337
+ Yehudit Meged, Avi Caciularu, Vered Shwartz, and Ido Dagan. 2020. Paraphrasing vs coreferring: Two sides of the same coin. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 4897-4907, Online. Association for Computational Linguistics.
338
+ Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
339
+ Nafise Sadat Moosavi, Leo Born, Massimo Poesio, and Michael Strube. 2019. Using automatically extracted minimum spans to disentangle coreference evaluation from boundary detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4168-4178, Florence, Italy. Association for Computational Linguistics.
340
+ Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.
341
+ Marten Postma, Filip Ilievski, and Piek Vossen. 2018. SemEval-2018 task 5: Counting events and participants in the long tail. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 70–80, New Orleans, Louisiana. Association for Computational Linguistics.
342
+ Sameer Pradhan, Julia Bonn, Skatje Myers, Kathryn Conger, Tim O'Gorman, James Gung, Kristin Wright-Bettner, and Martha Palmer. 2022. PropBank comes of Age—Larger, smarter, and more diverse. In Proceedings of the 11th Joint Conference on Lexical and
343
+
344
+ Computational Semantics, pages 278-288, Seattle, Washington. Association for Computational Linguistics.
345
+
346
+ Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Eduard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 30-35, Baltimore, Maryland. Association for Computational Linguistics.
347
+
348
+ Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748-8763. PMLR.
349
+
350
+ Karin Kipper Schuler. 2005. VerbNet: A broad-coverage, comprehensive verb lexicon. University of Pennsylvania.
351
+
352
+ Veselin Stoyanov and Jason Eisner. 2012. Easy-first coreference resolution. In Proceedings of COLING 2012, pages 2519-2534.
353
+
354
+ Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the 6th Conference on Message Understanding, MUC6 '95, page 45-52, USA. Association for Computational Linguistics.
355
+
356
+ Piek Vossen, Filip Ilievski, Marten Postma, and Roxane Segers. 2018. Don't annotate, but validate: A data-to-text method for capturing event data. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
357
+
358
+ Xiaodong Yu, Wenpeng Yin, and Dan Roth. 2022a. Pairwise representation learning for event coreference. In Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, pages 69-78, Seattle, Washington. Association for Computational Linguistics.
359
+
360
+ Xiaodong Yu, Wenpeng Yin, and Dan Roth. 2022b. Pairwise representation learning for event coreference. In Proceedings of the 11th Joint Conference on Lexical and Computational Semantics, pages 69-78.
361
+
362
+ Yutao Zeng, Xiaolong Jin, Saiping Guan, Jiafeng Guo, and Xueqi Cheng. 2020. Event coreference resolution with their paraphrases and argument-aware embeddings. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3084-3094, Barcelona, Spain (Online). International Committee on Computational Linguistics.
363
+
364
+ # A Ablation Study of Global Attention
+
+ Table 3 compares $\mathsf{D}_{\mathrm{long}}$ performance with and without Longformer global attention on the ECB+ and GVC dev sets.
+
+ <table><tr><td>Features</td><td>ECB+</td><td>GVC</td></tr><tr><td>w/o global attn.</td><td>85.0</td><td>76.5</td></tr><tr><td>w/ global attn.</td><td>82.9</td><td>77.0</td></tr></table>
+
+ Table 3: CoNLL F1 scores from the D Encoder with and without Longformer global attention on the GVC and $\mathrm{ECB + }$ dev sets.
+
+ The results show a dataset-specific contrast with respect to sequence length: performance with global attention on the GVC dev set is only marginally better than without, while the reverse holds on the $\mathrm{ECB + }$ dev set. This suggests that the "relevant" or "core" context for ECR lies closer to the neighborhood of the event lemmas (wrapped by trigger tokens) than to the CLS tokens (which use global attention) in both corpora, albeit more so in $\mathrm{ECB + }$. Applying global attention to the CLS tokens therefore encodes more irrelevant context, so $\mathsf{D}_{\mathrm{long}}$ with Longformer global attention performs worse on $\mathrm{ECB + }$ while remaining almost comparable to $\mathsf{D}_{\mathrm{long}}$ without global attention on GVC.
+
+ # B Full Results
+
+ Table 4 shows complete results for all metrics from all models for within and cross-document coreference resolution on the GVC test set. Table 5 shows complete results for all metrics from all models on the $\mathrm{ECB + }$ test set.
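For reference, the CoNLL F1 reported in Tables 4 and 5 is the conventional unweighted mean of the MUC, B3, and CEAFe F1 scores; a minimal sketch, checked against the Held et al. (2021) and Bugert et al. (2021) rows of the GVC table:

```python
# CoNLL F1 is the unweighted mean of the MUC, B3, and CEAFe F1 scores.
def conll_f1(muc_f1: float, b3_f1: float, ceafe_f1: float) -> float:
    return (muc_f1 + b3_f1 + ceafe_f1) / 3.0

# Held et al. (2021) row from the GVC results table:
print(round(conll_f1(91.5, 83.0, 76.7), 1))  # 83.7
# Bugert et al. (2021) row:
print(round(conll_f1(71.7, 59.5, 47.0), 1))  # 59.4
```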
+
+ # C Qualitative Error Examples
+
+ Table 6 presents an example of each type of error we identified in the output of our discriminator $(\mathsf{D}_{\mathrm{small}})$.
+
+ <table><tr><td rowspan="2"></td><td colspan="3">MUC</td><td colspan="3">B3</td><td colspan="3">CEAFe</td><td colspan="3">LEA</td><td>CoNLL</td></tr><tr><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>F1</td></tr><tr><td>Bugert et al. (2021)</td><td>78.1</td><td>66.3</td><td>71.7</td><td>73.6</td><td>49.9</td><td>59.5</td><td>38.2</td><td>60.9</td><td>47.0</td><td>56.5</td><td>38.2</td><td>45.6</td><td>59.4</td></tr><tr><td>Held et al. (2021)</td><td>91.8</td><td>91.2</td><td>91.5</td><td>82.2</td><td>83.8</td><td>83.0</td><td>75.5</td><td>77.9</td><td>76.7</td><td>79.0</td><td>82.3</td><td>80.6</td><td>83.7</td></tr><tr><td>LH</td><td>94.8</td><td>82.0</td><td>87.9</td><td>90.1</td><td>28.5</td><td>43.3</td><td>16.3</td><td>47.8</td><td>24.3</td><td>85.1</td><td>23.9</td><td>37.4</td><td>51.8</td></tr><tr><td>LHOra</td><td>95.2</td><td>82.3</td><td>88.3</td><td>91.2</td><td>29.1</td><td>44.1</td><td>18.6</td><td>54.7</td><td>27.8</td><td>86.4</td><td>24.9</td><td>38.6</td><td>53.4</td></tr><tr><td>LH + Dsmall</td><td>87.0</td><td>89.6</td><td>88.3</td><td>82.3</td><td>67.9</td><td>74.4</td><td>62.0</td><td>55.2</td><td>58.4</td><td>77.6</td><td>57.8</td><td>66.2</td><td>73.7</td></tr><tr><td>LHOra + Dsmall</td><td>89.1</td><td>90.2</td><td>89.6</td><td>85.0</td><td>68.0</td><td>75.6</td><td>62.7</td><td>59.6</td><td>61.1</td><td>80.6</td><td>59.5</td><td>68.5</td><td>75.4</td></tr><tr><td>LH + Dlong</td><td>84.0</td><td>91.1</td><td>87.4</td><td>79.0</td><td>76.4</td><td>77.7</td><td>69.6</td><td>52.5</td><td>59.9</td><td>74.1</td><td>63.9</td><td>68.6</td><td>75.0</td></tr><tr><td>LHOra + Dlong</td><td>84.9</td><td>91.4</td><td>88.0</td><td>80.4</td><td>77.4</td><td>78.9</td><td>70.5</td><td>54.3</td><td>61.3</td><td>75.7</td><td>65.5</td><td>70.2</td><td>76.1</td></tr></table>
+
+ Table 4: Results on within and cross-document event coreference resolution on GVC test set. Bolded F1 values indicate current or previous state of the art according to that metric as well as our best model.
+
+ <table><tr><td rowspan="2"></td><td colspan="3">MUC</td><td colspan="3">B3</td><td colspan="3">CEAFe</td><td colspan="3">LEA</td><td>CoNLL</td></tr><tr><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>R</td><td>P</td><td>F1</td><td>F1</td></tr><tr><td>Barhom et al. (2019)</td><td>78.1</td><td>84.0</td><td>80.9</td><td>76.8</td><td>86.1</td><td>81.2</td><td>79.6</td><td>73.3</td><td>76.3</td><td>64.6</td><td>72.3</td><td>68.3</td><td>79.5</td></tr><tr><td>Meged et al. (2020)</td><td>78.8</td><td>84.7</td><td>81.6</td><td>75.9</td><td>85.9</td><td>80.6</td><td>81.1</td><td>74.8</td><td>77.8</td><td>64.7</td><td>73.4</td><td>68.8</td><td>80.0</td></tr><tr><td>Cattan et al. (2021)</td><td>85.1</td><td>81.9</td><td>83.5</td><td>82.1</td><td>82.7</td><td>82.4</td><td>75.2</td><td>78.9</td><td>77.0</td><td>68.8</td><td>72.0</td><td>70.4</td><td>81.0</td></tr><tr><td>Zeng et al. (2020)</td><td>85.6</td><td>89.3</td><td>87.5</td><td>77.6</td><td>89.7</td><td>83.2</td><td>84.5</td><td>80.1</td><td>82.3</td><td>-</td><td>-</td><td>-</td><td>84.3</td></tr><tr><td>Yu et al. (2022b)</td><td>88.1</td><td>85.1</td><td>86.6</td><td>86.1</td><td>84.7</td><td>85.4</td><td>79.6</td><td>83.1</td><td>81.3</td><td>-</td><td>-</td><td>-</td><td>84.4</td></tr><tr><td>Allaway et al. (2021)</td><td>81.7</td><td>82.8</td><td>82.2</td><td>80.8</td><td>81.5</td><td>81.1</td><td>79.8</td><td>78.4</td><td>79.1</td><td>-</td><td>-</td><td>-</td><td>80.8</td></tr><tr><td>Caciularu et al. (2021)</td><td>87.1</td><td>89.2</td><td>88.1</td><td>84.9</td><td>87.9</td><td>86.4</td><td>83.3</td><td>81.2</td><td>82.2</td><td>76.7</td><td>77.2</td><td>76.9</td><td>85.6</td></tr><tr><td>Held et al. (2021)</td><td>87.0</td><td>88.1</td><td>87.5</td><td>85.6</td><td>87.7</td><td>86.6</td><td>80.3</td><td>85.8</td><td>82.9</td><td>74.9</td><td>73.2</td><td>74.0</td><td>85.7</td></tr><tr><td>LH</td><td>85.1</td><td>75.6</td><td>80.1</td><td>83.2</td><td>72.2</td><td>77.3</td><td>66.2</td><td>78.1</td><td>71.7</td><td>67.3</td><td>62.6</td><td>64.9</td><td>76.4</td></tr><tr><td>LHOra</td><td>99.1</td><td>79.6</td><td>88.3</td><td>97.9</td><td>67.7</td><td>80.0</td><td>65.9</td><td>93.7</td><td>77.4</td><td>85.1</td><td>63.8</td><td>72.9</td><td>81.9</td></tr><tr><td>LH + Dsmall</td><td>76.2</td><td>86.9</td><td>81.2</td><td>77.8</td><td>85.7</td><td>81.6</td><td>83.9</td><td>73.0</td><td>78.1</td><td>68.7</td><td>71.5</td><td>70.1</td><td>80.3</td></tr><tr><td>LHOra + Dsmall</td><td>89.8</td><td>87.6</td><td>88.7</td><td>90.7</td><td>80.2</td><td>85.1</td><td>82.5</td><td>85.1</td><td>83.8</td><td>83.3</td><td>72.2</td><td>77.3</td><td>85.9</td></tr><tr><td>LH + Dlong</td><td>80.0</td><td>87.3</td><td>83.5</td><td>79.6</td><td>85.4</td><td>82.4</td><td>83.1</td><td>75.5</td><td>79.1</td><td>70.5</td><td>73.3</td><td>71.9</td><td>81.7</td></tr><tr><td>LHOra + Dlong</td><td>93.7</td><td>87.9</td><td>90.7</td><td>94.1</td><td>79.6</td><td>86.3</td><td>81.6</td><td>88.7</td><td>85.0</td><td>86.8</td><td>73.2</td><td>79.4</td><td>87.4</td></tr></table>
+
+ Table 5: Results on within and cross-document event coreference resolution on $\mathrm{ECB + }$ test set with gold mentions and predicted topics. Bolded F1 values indicate current or previous state of the art according to that metric as well as our best model.
+
+ <table><tr><td>Category</td><td>Snippet</td></tr><tr><td>Adversarial/Conflicting</td><td>British climber &lt;m&gt; dies &lt;/m&gt; in New Zealand fall....The first of the &lt;m&gt; deaths &lt;/m&gt; this weekend was that of a New Zealand climber who fell on Friday morning.</td></tr><tr><td>Adversarial/Conflicting</td><td>British climber &lt;m&gt; dies &lt;/m&gt; in New Zealand fall....Australian Ski Mountaineer &lt;m&gt;Dies&lt;/m&gt; in Fall in New Zealand.</td></tr><tr><td>Adversarial/Conflicting</td><td>..Prosecutor Kym Worthy announces charges against individuals involved in the gun violence &lt;m&gt; deaths &lt;/m&gt; of children in Detroit .... Grandparents charged in 5-year - old &#x27;s shooting &lt;m&gt; death &lt;/m&gt; Buy Photo Wayne County Prosecutor Kym Worthy announces charges against individuals involved in the gun violence deaths of children...</td></tr><tr><td>Pronoun Lemmas</td><td>This just does not happen in this area whatsoever . It &lt;/m&gt;’s just unreal , ” said neighbor Sheila Rawlins....This &lt;/m&gt; just does not happen in this area whatsoever . It ’s just unreal , ” said neighbor Sheila Rawlins .</td></tr><tr><td>Set-Member Relationship</td><td>On Friday , Chicago surpassed 700 &lt;m&gt; homicides &lt;/m&gt; so far this year . ....&lt;m&gt;Homicide &lt;/m&gt; Watch Chicago Javon Wilson , the teenage grandson of U.S. Rep. Danny Davis , was shot to death over what police called an arugment over sneakers in his Englewood home Friday evening .</td></tr><tr><td>Weak Temporal Reasoning</td><td>Police : in an unrelated &lt;m&gt; incident &lt;/m&gt; a man was shot at 3:18 a.m. Saturday in North Toledo ....Toledo mother grieves 3-year - old ’s &lt;m&gt; shooting &lt;/m&gt; death | Judge sets bond at 580,000 USD for Toledo man accused of rape , kidnapping | Toledo man sentenced to 11 years in New Year ’s Day shooting</td></tr><tr><td>Incomplete, Short Context</td><td>Ellen DeGeneres to &lt;m&gt; Host &lt;/m&gt; Oscars....It will be her second &lt;m&gt; stint &lt;/m&gt; in the job , after hosting the 2007 ceremony and earning an Emmy nomination for it .</td></tr><tr><td>Similar context, Different event times</td><td>near Farmington Road around 9 p.m. There they found a 32-year - old unidentified man with a &lt;m&gt; gunshot &lt;/m&gt; wound outside of a home ....The family was driving about 8:26 p.m. Sunday in the 1100 block of South Commerce Street when &lt;m&gt; gunshots were fired &lt;/m&gt; from a dark sedan that began following their vehicle...</td></tr><tr><td>Same Lemma, Ambiguous Context</td><td>Police : Man Shot To Death In Stockton Related To 3-Year - Old &lt;m&gt; Killed &lt;/m&gt; By Stray Bullet 2 p.m. UPDATE : Stockton Police have identified the man shot and killed on ...Police : Man Shot To Death In Stockton Related To 3-Year - Old Killed By Stray Bullet 2 p.m. UPDATE : Stockton Police have identified the man shot and killed &lt;/m&gt; on Tuesday night.</td></tr><tr><td>Lexically different, Semantically same</td><td>One man is dead after being &lt;m&gt; shot &lt;/m&gt; by a gunman ....Employees at a Vancouver wholesaler were coping Saturday with the death of their boss , who was gunned down &lt;/m&gt; at their office Christmas party .</td></tr><tr><td>Misc.</td><td>Baton Rouge Police have charged 17-year - old Ahmad Antoine of Baton Rouge with Negligent Homicide in the city ’s latest shooting &lt;m&gt; death &lt;/m&gt; ....Tagged Baton Rouge , &lt;m&gt; homicide &lt;/m&gt;.</td></tr></table>
+
+ Table 6: Qualitative analysis of the hard mention pairs incorrectly linked (or missed) by our discriminator $(\mathsf{D}_{\mathrm{small}})$ on the $\mathsf{ECB}+$ and GVC dev sets: underlined and bold-faced mentions surrounded by trigger tokens indicate incorrect and missing assignments, respectively. Underlined spans without trigger tokens represent the category-specific quality being highlighted. The miscellaneous category (Misc.) refers to other errors, including (reasonable) predictions that are either incorrect annotations in the gold data or incomplete gold sentences.
2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04e98b6c986d4303a914f6576c2b2b9c484f2aef7e0a9554584414be32b89721
+ size 725158
2023/2_n is better than n2_ Decomposing Event Coreference Resolution into Two Tractable Problems/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/3cfba471-572f-49ab-a5f2-cb3f967b96d6_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/3cfba471-572f-49ab-a5f2-cb3f967b96d6_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/3cfba471-572f-49ab-a5f2-cb3f967b96d6_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4f713ed70b143e4fd3ed107430ae5419bad70f497228bc09c3b6e4a19440c256
+ size 634174
2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/full.md ADDED
@@ -0,0 +1,454 @@
+ # A Benchmark on Extremely Weakly Supervised Text Classification: Reconcile Seed Matching and Prompting Approaches
+
+ Zihan Wang $^{1*}$ Tianle Wang $^{2*}$ Dheeraj Mekala $^{1}$ Jingbo Shang $^{1\dagger}$
+
+ <sup>1</sup> University of California, San Diego
+
+ $^{2}$ Shanghai Jiao Tong University
+
+ {ziw224, dmekala, jshang}@ucsd.edu wtl666wtl@sjtu.edu.cn
+
+ # Abstract
+
+ EXtremely Weakly Supervised Text Classification (XWS-TC) refers to text classification based on minimal high-level human guidance, such as a few label-indicative seed words or classification instructions. There are two mainstream approaches for XWS-TC, however, never being rigorously compared: (1) training classifiers based on pseudo-labels generated by (softly) matching seed words (SEED) and (2) prompting (and calibrating) language models using classification instruction (and raw texts) to decode label words (PROMPT). This paper presents the first XWS-TC benchmark to compare the two approaches on fair grounds, where the datasets, supervisions, and hyperparameter choices are standardized across methods. Our benchmarking results suggest that (1) Both SEED and PROMPT approaches are competitive and there is no clear winner; (2) SEED is empirically more tolerant than PROMPT to human guidance (e.g., seed words, classification instructions, and label words) changes; (3) SEED is empirically more selective than PROMPT to the pre-trained language models; (4) Recent SEED and PROMPT methods have close connections and a clustering post-processing step based on raw in-domain texts is a strong performance booster to both. We hope this benchmark serves as a guideline in selecting XWS-TC methods in different scenarios and stimulates interest in developing guidance- and model-robust XWS-TC methods<sup>1</sup>.
+
+ # 1 Introduction
+
+ Recently there has been significant advancement in text classification with the emergence of Extremely Weakly Supervised Text Classification (XWS-TC) methods (Meng et al., 2020b; Wang et al., 2021; Zhang et al., 2021b; Zhao et al., 2022; Park and Lee, 2022), which require no human-annotated datasets. Instead, these methods rely on minimal human guidance, such as the names of the classes or instructions describing the classification task. There are two main approaches to XWS-TC: one based on matching seed words (SEED), and the other on prompting a language model (LM) with instructions (PROMPT). We give a brief introduction in the following paragraphs, and a more thorough review is in Section 3.
+
+ ![](images/4e68a3cb14955b4853d7f6376d1eb5ed2e9c6b6eb06cc06d376263928e47f7d0.jpg)
+ Figure 1: Illustrations of the XWS-TC problem and the SEED and PROMPT approaches.
+
+ SEED methods for XWS-TC rely on a user-specified list of seed words for each class, as well as an unlabeled in-domain corpus. These seed words are then expanded into a larger set of related words for the class through statistical methods (Mekala and Shang, 2020), embedding similarity (Wang et al., 2021), or masked language model predictions (Meng et al., 2020b). These related words are used to assign a pseudo-class to each text in the unlabeled corpus through some matching strategy (e.g., assign a text to a class if it contains the related words for that class). The pseudo labels are then used to train a classifier through standard fully-supervised fine-tuning.
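As a concrete illustration of the hard string-matching flavor of pseudo-labeling (the seed lists and matching rule below are hypothetical, not any specific method's implementation), each unlabeled text is assigned the class whose seed words it mentions most often, abstaining on ties or when no seed word appears:

```python
from collections import Counter
from typing import Optional

# Hypothetical (expanded) seed-word lists for a tiny topic task.
SEEDS = {
    "sports": {"game", "score", "team", "player"},
    "business": {"market", "stock", "profit", "economy"},
}

def pseudo_label(text: str) -> Optional[str]:
    """Assign the class whose seed words occur most often; None means abstain."""
    tokens = text.lower().split()
    counts = Counter()
    for cls, seeds in SEEDS.items():
        counts[cls] = sum(tok in seeds for tok in tokens)
    best, runner_up = counts.most_common(2)
    if best[1] == 0 or best[1] == runner_up[1]:
        return None  # no evidence, or a tie: leave the text unlabeled
    return best[0]

print(pseudo_label("the team won the game with a late score"))  # sports
print(pseudo_label("quarterly profit lifted the stock market"))  # business
print(pseudo_label("weather was mild today"))                    # None
```

The texts kept (with their pseudo-labels) would then feed a standard fine-tuning step as described above.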
+
+ On the other hand, PROMPT methods for XWS-TC rely on reformulating text using an instruction template and prompting the language model to generate the likelihoods for each label in the classification task (Brown et al., 2020). For example, in a sentiment classification task, using an instruction template of <text>. sentiment:, the model generating "happy" or "sad" will help classify the sentiment of the text. Naive zero-shot prompting considers the highest-likelihood label as the answer, and recent improvements for more accurate likelihoods include calibration of likelihood scores (Holtzman et al., 2021; Zhao et al., 2021; Han et al., 2022) and verbalizers that find more label words to better represent the class (Schick and Schütze, 2021; Ma et al., 2023; Hu et al., 2022).
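Schematically, zero-shot PROMPT classification reduces to comparing label-word scores under the instruction template. The sketch below uses a toy stand-in for the LM's next-token log-probabilities (no real model is queried, and the cue words are invented for illustration):

```python
TEMPLATE = "{text}. sentiment:"                       # instruction template
LABEL_WORDS = {"positive": " happy", "negative": " sad"}

def toy_next_token_logprob(prompt: str, token: str) -> float:
    """Toy stand-in for an LM scoring a candidate next token (made-up numbers)."""
    positive_cues = sum(w in prompt for w in ("love", "great", "wonderful"))
    negative_cues = sum(w in prompt for w in ("hate", "awful", "terrible"))
    score = positive_cues - negative_cues
    return score if token == " happy" else -score

def classify(text: str) -> str:
    # Fill the template, score each label word, and take the argmax.
    prompt = TEMPLATE.format(text=text)
    scores = {label: toy_next_token_logprob(prompt, word)
              for label, word in LABEL_WORDS.items()}
    return max(scores, key=scores.get)

print(classify("I love this movie"))   # positive
print(classify("What an awful plot"))  # negative
```

With a real generative LM, `toy_next_token_logprob` would be replaced by the model's actual next-token log-probabilities; the calibration methods discussed later adjust exactly these scores.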
+
+ Both SEED and PROMPT methods have demonstrated strong performance in XWS-TC. However, there has been a lack of comprehensive comparison between these two approaches. This is due to the perception that the approaches are unrelated and the lack of standardization in datasets, supervision, and hyperparameter choices across methods.
+
+ We are motivated to construct a benchmark that fairly evaluates the performance of XWS-TC methods. The benchmark consists of 11 datasets covering four domains along with their fine-grained variants and different numbers of classes. In addition, we make an effort to use the same hyperparameters across datasets for the methods, as there should not be a development set to tune the hyperparameters in the XWS setting (Perez et al., 2021).
+
+ Our benchmarking results suggest that both SEED and PROMPT approaches are competitive, with no clear winner. SEED tends to perform better when both approaches use a similar-sized pretrained model and is more robust and tolerant to changes in human guidance (such as seed words, classification instructions, and label words). On the other hand, PROMPT methods have the ability to handle more general types of human guidance (such as descriptions of class names, rather than specific words) and do not have a strict requirement for an unlabeled corpus. When the underlying pre-trained language model changes, PROMPT is more robust and scales better with the language model than SEED. We also examine two specific methods, one from each approach: X-Class (Wang et al., 2021) and ProtoCal (Han et al., 2022), which independently proposed a post-processing approach that calibrates the class predictions through clustering on an unlabeled in-domain corpus to improve classification performance. Our results show that this subroutine can be a universal booster for both SEED and PROMPT approaches.
+
+ Through this benchmark, we aim to advance the study of XWS-TC methods and call for the development of methods that are robust to different human guidance and language models. We firmly believe that this paper will serve as a guide for selecting the appropriate method in different scenarios and contribute to the advancement of the field.
39
+
40
+ # 2 Related Work
41
+
42
+ # 2.1 Different Types of Weak Supervision
43
+
44
+ Extremely Weak Supervision is a setting that assumes access to only high-level human inputs, such as names of classes or instructions about classification criteria. We briefly discuss different types of minimal supervision in the following paragraphs.
+
+ Few-shot Supervision Few-shot supervision is the setting where there are only a small number of labeled examples for each of the classes. An intuitive way is to directly train the classifier on few-shot data, but usually that yields subpar performance. Another popular way is called in-context learning, where the few-shot supervision is used as context to prompt LM for the answer (Brown et al., 2020). Various methods have been proposed to improve it by searching for better label words (Schick and Schütze, 2021; Ma et al., 2023), stabilizing the output (Lu et al., 2022), and efficient fine-tuning (Gao et al., 2021).
+
+ Distant Supervision Distant supervision includes supervision from external resources such as encyclopedias or gazetteers. There have been efforts to incorporate external knowledge into prompting (Hu et al., 2022), phrase mining (Shang et al., 2018), and named entity recognition (Liang et al., 2020). External models can also be used to help with extremely weak supervision. A line of research is on leveraging models trained on natural language inference data to suggest better-related words (Park and Lee, 2022) or directly classify the text (Yin et al., 2019; Gera et al., 2022).
+
+ No Supervision Unsupervised methods fall into this category, where they require no supervision. These methods typically take one of two approaches: (1) clustering (Aharoni and Goldberg, 2020), or (2) topic modeling (Blei et al., 2003). However, both of these approaches lack control over the generated clusters/topics, i.e., the classes. For example, a text corpus can be categorized on several bases, including topic, location, and sentiment; an unsupervised method cannot handle such scenarios. It would be beneficial to be able to retrieve all possible classifications of a corpus in an unsupervised manner, but as far as we are aware, there are no methods with this ability.
+
+ # 2.2 Weak Supervision Benchmarks
+
+ We introduce two other weak supervision benchmarks and discuss how this work differs from them.
+
+ Wrench (Zhang et al., 2021a) is a benchmark that explored various types of weak supervision labeling functions (i.e., rules used to label the text). They synthesize the performance of different labeling functions, ways to combine them, and the fine-tuning process to learn the pseudo-training data. In our benchmark, we analyze extremely weak text classifiers that go beyond the labeling functions and compare their performance and robustness with zero-shot prompting.
+
+ AutoWS-Bench-101 (Roberts et al., 2022) is another benchmark that analyzes how labeling functions help text classification along with additional few-shot supervision. They conclude that pretrained models are strong baselines for in-domain settings and should be considered for integration with weak supervision methods. In this work, we focus on extremely weak supervision methods without any labeled data. The SEED and PROMPT methods compared in this benchmark are all based on pre-trained language models.
+
+ # 2.3 Verbalizers
+
+ Verbalizers are a type of PROMPT method that find a larger set of label words so that the class choices are accurately represented. We did not consider Verbalizer methods in this benchmark since they mostly rely on additional supervision, such as few-shot (Schick and Schütze, 2021; Ma et al., 2023) or an external knowledge base (Hu et al., 2022).
+
+ # 3 Background
+
+ Extremely Weak Supervision in Text Classification refers to using only a few pieces of high-level human guidance as supervision. This guidance typically takes the form of seed words that describe each class, or an instruction paired with label words that define the task. There are two main approaches for XWS-TC: matching seed words (SEED) and prompting language models (PROMPT).
+
+ # 3.1 Seed Matching Methods
+
+ SEED approaches are provided with a few class-indicative seed words and unlabeled documents as input. These methods typically involve seed word expansion, where more words related to the provided seed words are identified in the unlabeled corpus through several statistics-based (Salton and Buckley, 1988; Mekala and Shang, 2020) or deep learning-based strategies (Meng et al., 2020b; Wang et al., 2021; Zhang et al., 2021b). Using these expanded seed words, each unlabeled document is pseudo-labeled. Different heuristics have been explored for pseudo-labeling, such as string matching (Meng et al., 2018). Recently, the matching approach has also evolved into softer forms such as embedding-based matching (Wang et al., 2021) and graph-based matching (Zhang et al., 2021b), which can address conflicts in a principled manner during pseudo-labeling.
+
+ We introduce four strong-performing SEED methods that we include in our benchmark.
+
+ LotClass (Meng et al., 2020b) obtains related words by predicting masked tokens with a model trained via masked language modeling (Devlin et al., 2019) over an unlabelled corpus. It matches text to related words by fine-tuning a model to predict the related words given a text.
+
+ XClass (Wang et al., 2021) obtains related words by finding words that have similar representations. It constructs class-oriented representations for text and matches the text to related words by representation similarity. The authors also showed that performance can be improved significantly by matching based on clusters of text representations.
+
+ ClassKG (Zhang et al., 2021b) models the dependencies among related words as an annotation problem on a keyword graph.
+
+ NPPrompt (Zhao et al., 2022) obtains related words through embedding similarity from a pretrained LM. The related words are used as label words to prompt a generative LM for predictions, which are then aggregated as the matching result. To some extent, NPPrompt belongs to an intersection of PROMPT and SEED methods.
+
+ # 3.2 Prompt Methods
+
+ Prompting language models is another approach to extremely weak supervision in text classification. This approach involves prompting a generative language model with an instructive text and extracting the likelihoods of different label words. It does not require an unlabeled in-domain corpus and can be used to predict text in an online fashion. However, language models are known to be biased towards text sequences more common in the pre-training data, leading to instability in zero-shot and few-shot settings. Recently proposed post-processing methods (Holtzman et al., 2021; Han et al., 2022) attempt to address this by calibrating the predicted probabilities using estimates of the model's bias towards each verbalized label. We describe two calibration methods.
91
+
92
+ DC-PMI (Holtzman et al., 2021) uses a null prompt to obtain the language model's raw likelihood of predicting each label. Then, for each text, the likelihood of each candidate label is adjusted by factoring out this raw, text-independent likelihood.
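As a toy numeric sketch (the probabilities below are made up for illustration, not taken from any model), the calibration amounts to subtracting the null-prompt log-likelihood from the full-prompt log-likelihood for each label, which can flip a bias-driven prediction:

```python
import math

# Made-up log-probabilities for a two-label task.
logp_with_text = {"positive": math.log(0.70), "negative": math.log(0.30)}
logp_null      = {"positive": math.log(0.90), "negative": math.log(0.10)}  # model bias

# Uncalibrated prediction follows the raw likelihood under the full prompt.
raw_pred = max(logp_with_text, key=logp_with_text.get)

# DC-PMI-style calibration: subtract the null-prompt log-likelihood
# (equivalently, divide the probabilities), i.e., a pointwise mutual information.
calibrated = {lbl: logp_with_text[lbl] - logp_null[lbl] for lbl in logp_with_text}
dcpmi_pred = max(calibrated, key=calibrated.get)

print(raw_pred, dcpmi_pred)  # positive negative
```

Here the model's strong prior toward "positive" is discounted, so the text's actual evidence (which raises "negative" from 0.10 to 0.30) wins after calibration.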
+
+ ProtoCal (Han et al., 2022) considers an unlabelled corpus and obtains the predicted likelihoods on the corpus. The likelihood vectors are then clustered to better obtain the prediction boundary for each class. Instead of maximum likelihood, this prediction boundary is used to predict the class.
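The idea can be sketched with a tiny hand-rolled 2-means over one component of the likelihood vectors (the data and clustering details are illustrative, not the paper's implementation): cluster the unlabeled corpus's predicted probabilities, then classify by the learned cluster boundary instead of the 0.5 argmax threshold.

```python
# P(positive | text) from a (hypothetically) biased zero-shot model on an
# unlabeled corpus: the model leans positive, so even negative texts score
# slightly above 0.5.
corpus = [0.55, 0.60, 0.58, 0.52, 0.92, 0.95, 0.88, 0.90]

def two_means(xs, iters=25):
    """Tiny deterministic 2-means on scalars, initialized at the min and max."""
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        low = [x for x in xs if abs(x - lo) <= abs(x - hi)]
        high = [x for x in xs if abs(x - lo) > abs(x - hi)]
        if low and high:
            lo, hi = sum(low) / len(low), sum(high) / len(high)
    return lo, hi

lo, hi = two_means(corpus)
boundary = (lo + hi) / 2  # decision boundary between the two clusters

def protocal_predict(p_positive: float) -> str:
    # Nearest-centroid decision instead of the 0.5 argmax threshold.
    return "positive" if p_positive > boundary else "negative"

print(protocal_predict(0.60))  # negative (raw argmax would say positive)
print(protocal_predict(0.92))  # positive
```

With this toy data the boundary lands around 0.74, so the low cluster of bias-inflated scores is correctly treated as the negative class.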
+
+ Some more SEED and PROMPT methods are described in Appendix A.
+
+ # 4 Benchmark
+
+ In order to establish a benchmark that can accurately evaluate various XWS-TC methods, it is essential to consider a range of factors (dataset choices, instructions, label words, hyperparameter control, the use of pre-trained language models, and metrics) and to ensure their consistency across all experiments. We discuss each of these factors in detail in the following sections.
+
+ # 4.1 Dataset
+
+ We consider datasets from prior evaluations (Holtzman et al., 2021; Wang et al., 2021; Meng et al., 2020b) that contain data from diverse domains. To facilitate the evaluation process, the size of the evaluation set for each dataset has been controlled to a few thousand instances. Additionally, as many XWS-TC methods require the use of an unlabelled in-domain corpus, a similar-sized sample has been sampled from the training split to serve this purpose, with the evaluation set and unlabelled corpus being disjoint. The datasets have been uniformly sampled without altering the distribution of labels, thus preserving the imbalance ratio, which is defined as the ratio between the size of the largest class and the smallest class. The statistics of the datasets are presented in Table 1. Details of the sources of the datasets are in Appendix B.
+
+ # 4.2 Instructions and Label/Seed Words
+
+ To fairly compare SEED and PROMPT methods, we need to provide equal amounts of human supervision. That means for SEED methods we should allow only a single word for each class, matching the amount used for label words. For instructions, we consider simple ones that hint at the classification criteria (Holtzman et al., 2021). Detailed choices can be found in Appendix C.
+
+ # 4.3 Metrics
+
+ For evaluation metrics, we consider the macro $\mathrm{F_1}$ score on a dataset-by-dataset basis, which values each class within a dataset equally. To understand the performance of a method on all datasets, we employ two metrics: the average of the macro $\mathrm{F_1}$ scores, and a ranking-based metric that combines the ranking of methods on each dataset to obtain a scale-prone value (Colombo et al., 2022).
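Concretely, macro F1 is the unweighted mean of the per-class F1 scores, so each class counts equally regardless of its size; a self-contained sketch:

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # F1 = 2TP / (2TP + FP + FN); defined as 0 when there are no true positives.
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["pos", "pos", "pos", "neg"]
y_pred = ["pos", "pos", "neg", "neg"]
print(round(macro_f1(y_true, y_pred), 3))  # 0.733
```

Note how the single "neg" example (F1 = 0.667) pulls the average down as much as the majority "pos" class (F1 = 0.8) pulls it up, which is exactly why macro F1 suits the imbalanced datasets in Table 1.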
+
+ # 4.4 Hyperparameters
+
+ Another crucial aspect of the benchmark is the number of hyperparameters utilized by each method. In the context of extremely weak supervision, we argue that it is unrealistic to use different hyperparameters for different datasets, as doing so would necessitate the use of a separate development set, thereby defeating the purpose of using only high-level human supervision (Perez et al., 2021). Therefore, we slightly tune the hyperparameters on one of the datasets to rule out failing scenarios and then stick with a single choice of hyperparameters throughout all datasets. Under this hyperparameter enforcement, the ideal method should exhibit consistent performance across all datasets.
+
+ # 4.5 Pre-trained Language Models
+
+ PROMPT methods use generative language models such as GPT, while SEED methods use representation-encoding language models such as BERT. To fairly compare methods between these two approaches on XWS-TC, we have to consider the ability of the language models as a factor. We use the number of parameters of the pre-trained language model as an approximation of its power. Since all the language models use the transformer as the backbone, this implies that the number of layers and the size of the hidden states are controlled. A further discussion is in Appendix D.
121
+
122
+ <table><tr><td>Name</td><td>Domain</td><td># Classes</td><td>|Unlabelled|</td><td>|Eval|</td><td>Imbalance</td></tr><tr><td>IMDB</td><td>Reviews/Sentiment</td><td>2</td><td>5000</td><td>5000</td><td>1.0</td></tr><tr><td>Yelp-2</td><td>Reviews/Sentiment</td><td>2</td><td>5600</td><td>3800</td><td>1.1</td></tr><tr><td>Yelp-5</td><td>Reviews/Sentiment</td><td>5</td><td>6500</td><td>5000</td><td>1.1</td></tr><tr><td>AGNews</td><td>News/Topic</td><td>4</td><td>6000</td><td>7600</td><td>1.0</td></tr><tr><td>20News</td><td>News/Topic</td><td>5</td><td>6254</td><td>5362</td><td>1.9</td></tr><tr><td>20News-Fine</td><td>News/Topic</td><td>17</td><td>5589</td><td>4792</td><td>1.3</td></tr><tr><td>NYT-S</td><td>News/Topic</td><td>5</td><td>4578</td><td>3925</td><td>17.1</td></tr><tr><td>NYT-S-Fine</td><td>News/Topic</td><td>26</td><td>4034</td><td>3459</td><td>96.3</td></tr><tr><td>NYT</td><td>News/Topic</td><td>9</td><td>5119</td><td>6400</td><td>30.7</td></tr><tr><td>NYT-Loc</td><td>News/Location</td><td>10</td><td>5119</td><td>6400</td><td>17.1</td></tr><tr><td>DBpedia</td><td>Wikipedia/Ontology</td><td>14</td><td>5600</td><td>7000</td><td>1.3</td></tr></table>
123
+
124
+ Table 1: Dataset statistics in our benchmark.
125
+
126
+ # 4.6 Large Language Models
127
+
128
+ This benchmark specifically excludes the evaluation of (multi-task) fine-tuned language models such as T0 (Sanh et al., 2022), large language models (LLMs) such as GPT-3, and human-feedback-trained language models like InstructGPT (Ouyang et al., 2022) and ChatGPT, because there are no equivalent representation-encoding language models for the SEED approaches. We discuss this in more detail and include an evaluation of ChatGPT on a single dataset as a reference in Appendix E.
129
+
130
+ # 5 Benchmark Experiments
131
+
132
+ # 5.1 Main Results
133
+
134
+ In Table 2 we show the performances of all SEED and PROMPT methods considered in the benchmark across the 11 datasets and report the average macro $\mathrm{F_1}$ performance and the rank score.
135
+
136
+ Performance of PROMPT Methods We note that the performance of the standalone PROMPT method is about 20 points lower than its calibrated counterparts. The use of additional instance-independent instructions (DCPMI) or an additional clustering step on unlabelled text (ProtoCal) is crucial for PROMPT methods to work well in XWS (zero-shot) text classification.
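To recall what the DCPMI-style calibration does, here is a minimal sketch (our own naming; the scoring interface and the content-free premise string are illustrative assumptions, following the domain-conditional PMI idea of Holtzman et al., 2021): each label word's score is its conditional log-probability given the text minus its log-probability under an instance-independent input.

```python
def dcpmi_scores(label_logprob, text, labels, domain_premise="N/A"):
    """Domain-conditional PMI sketch: subtract each label word's log-probability
    under a content-free input from its log-probability given the actual text,
    so label words the model prefers unconditionally are penalized.
    `label_logprob(context, label)` is an assumed interface returning
    log p(label | context) from the language model."""
    return {y: label_logprob(text, y) - label_logprob(domain_premise, y)
            for y in labels}
```

Without the subtraction, a label word that is frequent a priori can win regardless of the input, which is one source of the roughly 20-point gap observed above.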
137
+
138
+ Performance of SEED Methods All the SEED methods exhibit strong performance, with X-Class performing stably well across all datasets, and ClassKG performing the best on several datasets, but losing on certain fine-grained datasets.
139
+
140
+ Comparing PROMPT and SEED Methods First, on absolute performance, we can see that SEED methods overall outperform PROMPT methods, even when appropriate calibration is added for PROMPT. However, we can also observe that a larger pre-trained GPT model improves PROMPT methods quite significantly, while SEED methods gain less from a larger pre-trained language model. This effect is further studied in Section 5.2.3.
143
+
144
+ # 5.2 Robustness
145
+
146
+ Through this benchmark, we hope not only to determine which method performs best, but also to analyze which method is more robust when circumstances change. Different choices of label words/seed words, instructions, and pre-trained language models can all occur in practice, so the robustness of a method when these ingredients are reasonably varied indicates how stable it is. Due to the cost of running each method multiple times, we focus on 4 datasets spanning different domains, imbalance ratios, and numbers of classes: Yelp-2, AGNews, NYT-S, and DBpedia. We leave out two methods, LoT-Class and NPPrompt, to save computational resources.
147
+
148
+ # 5.2.1 Different Seed/Label words
149
+
150
+ In Table 3 we explore the effect of using different choices of label words and seed words. For example, for Yelp-2, we chose negative/positive, terrible/great, bad/good, awful/fine, and nasty/nice as the variants. We report the performance of the methods for each of the five choices, as well as the aggregated performance over the 4 aforementioned datasets. We notice that PROMPT methods in general exhibit high instability. While DCPMI and ProtoCal can remedy the variance somewhat, SEED methods are still more robust to changes of seed words.
151
+
152
+ <table><tr><td>Method</td><td>Model</td><td>IMDB</td><td>Yelp-2</td><td>Yelp-5</td><td>AGNews</td><td>20News</td><td>20News-Fine</td><td>NYT-S</td><td>NYT-S-Fine</td><td>NYT</td><td>NYT-Loc</td><td>DBpedia</td><td>Average</td><td>Rank Score</td></tr><tr><td colspan="15">PROMPT</td></tr><tr><td rowspan="2">Prompt</td><td>GPT2-small</td><td>56.42</td><td>47.36</td><td>7.62</td><td>38.42</td><td>36.32</td><td>28.76</td><td>22.45</td><td>38.90</td><td>33.44</td><td>60.32</td><td>13.93</td><td>34.90</td><td>0</td></tr><tr><td>GPT2-medium</td><td>35.80</td><td>33.57</td><td>25.87</td><td>69.36</td><td>55.16</td><td>46.03</td><td>54.08</td><td>46.14</td><td>24.92</td><td>79.00</td><td>24.52</td><td>44.95</td><td>1</td></tr><tr><td rowspan="2">Prompt + DCPMI</td><td>GPT2-small</td><td>70.13</td><td>65.34</td><td>23.01</td><td>72.67</td><td>61.64</td><td>37.45</td><td>73.93</td><td>63.19</td><td>55.20</td><td>70.40</td><td>51.10</td><td>58.55</td><td>4</td></tr><tr><td>GPT2-medium</td><td>63.24</td><td>87.00</td><td>11.34</td><td>74.13</td><td>61.15</td><td>52.74</td><td>79.80</td><td>67.66</td><td>58.44</td><td>87.35</td><td>57.30</td><td>63.65</td><td>8</td></tr><tr><td rowspan="2">Prompt + ProtoCal</td><td>GPT2-small</td><td>70.35</td><td>65.89</td><td>23.77</td><td>72.66</td><td>58.62</td><td>36.77</td><td>53.69</td><td>29.82</td><td>55.15</td><td>65.80</td><td>51.97</td><td>53.14</td><td>2</td></tr><tr><td>GPT2-medium</td><td>70.58</td><td>88.60</td><td>36.62</td><td>75.26</td><td>62.58</td><td>48.55</td><td>51.97</td><td>46.85</td><td>59.04</td><td>72.45</td><td>66.46</td><td>61.54</td><td>9</td></tr><tr><td colspan="15">SEED</td></tr><tr><td rowspan="2">LoT-Class</td><td>BERT-base</td><td>58.56</td><td>67.96</td><td>24.92</td><td>73.94</td><td>70.57</td><td>9.40</td><td>61.36</td><td>23.05</td><td>48.59</td><td>67.13</td><td>57.98</td><td>51.2</td><td>3</td></tr><tr><td>BERT-large</td><td>81.03</td><td>77.03</td><td>25.17</td><td>68.25</td><td>65.71</td><td>45.51</td><td>44.00</td><td>37.11</td><td>43.08</td><td>80.55</td><td>58.04</td><td>56.86</td><td>5</td></tr><tr><td rowspan="2">X-Class</td><td>BERT-base</td><td>82.89</td><td>85.44</td><td>28.80</td><td>81.81</td><td>76.98</td><td>58.78</td><td>91.94</td><td>61.06</td><td>67.19</td><td>86.38</td><td>89.50</td><td>73.71</td><td>10</td></tr><tr><td>BERT-large</td><td>82.05</td><td>90.39</td><td>31.02</td><td>85.91</td><td>77.52</td><td>59.98</td><td>87.53</td><td>68.40</td><td>68.73</td><td>85.77</td><td>87.91</td><td>75.02</td><td>12</td></tr><tr><td rowspan="2">ClassKG</td><td>BERT-base</td><td>88.08</td><td>92.21</td><td>32.33</td><td>88.10</td><td>81.72</td><td>52.29</td><td>84.12</td><td>49.59</td><td>60.79</td><td>92.81</td><td>94.75</td><td>74.25</td><td>13</td></tr><tr><td>BERT-large</td><td>90.96</td><td>93.10</td><td>39.41</td><td>87.30</td><td>83.84</td><td>51.62</td><td>80.95</td><td>59.95</td><td>56.31</td><td>91.03</td><td>72.74</td><td>73.38</td><td>11</td></tr><tr><td rowspan="2">NPPrompt</td><td>Roberta-base</td><td>85.19</td><td>81.17</td><td>14.20</td><td>80.42</td><td>68.92</td><td>48.64</td><td>77.76</td><td>55.23</td><td>64.46</td><td>53.85</td><td>60.36</td><td>62.75</td><td>7</td></tr><tr><td>Roberta-large</td><td>85.67</td><td>93.58</td><td>23.45</td><td>83.62</td><td>69.82</td><td>43.33</td><td>77.93</td><td>35.91</td><td>59.96</td><td>65.83</td><td>47.11</td><td>62.38</td><td>6</td></tr></table>
153
+
154
+ Table 2: Performance of PROMPT and SEED methods on the benchmark with standard models, prompt instructions, label words, and seed word choices. For all scores, higher is better.
155
+
156
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">Model</td><td colspan="7">Yelp-2</td><td colspan="3">Averaged over Datasets</td></tr><tr><td>default</td><td>alt. 1</td><td>alt. 2</td><td>alt. 3</td><td>alt. 4</td><td>Median</td><td>Average (std)</td><td>Median</td><td>Average</td><td>std</td></tr><tr><td colspan="12">PROMPT</td></tr><tr><td rowspan="2">Prompt</td><td>GPT2-small</td><td>47.36</td><td>49.34</td><td>32.84</td><td>58.19</td><td>32.24</td><td>47.36</td><td>43.99 (10.04)</td><td>32.88</td><td>31.01</td><td>6.37</td></tr><tr><td>GPT2-medium</td><td>33.57</td><td>32.89</td><td>32.84</td><td>55.10</td><td>32.78</td><td>32.89</td><td>37.44 (8.84)</td><td>39.39</td><td>40.70</td><td>8.77</td></tr><tr><td rowspan="2">Prompt + DCPMI</td><td>GPT2-small</td><td>65.34</td><td>57.19</td><td>72.80</td><td>45.12</td><td>56.98</td><td>57.19</td><td>59.49 (9.27)</td><td>61.81</td><td>62.46</td><td>5.13</td></tr><tr><td>GPT2-medium</td><td>87.00</td><td>66.65</td><td>36.53</td><td>75.31</td><td>39.23</td><td>66.65</td><td>60.94 (19.93)</td><td>68.56</td><td>66.54</td><td>7.26</td></tr><tr><td rowspan="2">Prompt + ProtoCal</td><td>GPT2-small</td><td>65.89</td><td>54.59</td><td>70.43</td><td>58.03</td><td>63.72</td><td>63.72</td><td>62.53 (5.63)</td><td>64.62</td><td>64.03</td><td>6.17</td></tr><tr><td>GPT2-medium</td><td>88.60</td><td>87.31</td><td>90.53</td><td>80.53</td><td>68.59</td><td>87.21</td><td>83.11 (8.00)</td><td>72.17</td><td>70.74</td><td>8.76</td></tr><tr><td colspan="12">SEED</td></tr><tr><td rowspan="2">X-Class</td><td>BERT-base</td><td>85.44</td><td>88.01</td><td>85.69</td><td>62.24</td><td>84.33</td><td>85.44</td><td>81.14 (9.53)</td><td>86.18</td><td>83.83</td><td>5.70</td></tr><tr><td>BERT-large</td><td>90.39</td><td>89.71</td><td>88.70</td><td>84.75</td><td>85.49</td><td>88.70</td><td>87.81 (2.27)</td><td>83.77</td><td>83.36</td><td>4.47</td></tr><tr><td rowspan="2">ClassKG</td><td>BERT-base</td><td>92.21</td><td>91.71</td><td>87.78</td><td>91.18</td><td>92.47</td><td>91.71</td><td>91.07 (1.70)</td><td>87.71</td><td>85.88</td><td>4.45</td></tr><tr><td>BERT-large</td><td>93.10</td><td>93.16</td><td>94.13</td><td>93.89</td><td>92.01</td><td>93.16</td><td>93.26 (0.74)</td><td>84.93</td><td>85.40</td><td>3.74</td></tr></table>
157
+
158
+ Table 3: Performance of PROMPT and SEED methods when the label word/seed word are changed to similar-meaning alternatives. We show the performance for 5 choices of label words on Yelp-2 (4 alternatives + 1 default), their median, average, and standard deviation, and the averaged metrics across all datasets.
159
+
160
161
+
162
+ # 5.2.2 Different Instructions
163
+
164
+ A high variance is also observed when the instructions given to PROMPT methods are changed, as shown in Table 4. A noticeable trend is that as the pre-trained model gets larger, performance increases, but so does the variance introduced by instructions or label words. This could be alarming for PROMPT methods.
165
+
166
+ # 5.2.3 Different Pre-trained Language Models
167
+
168
+ In Table 5 we analyze how changes in the pre-trained language model affect the performance of SEED and PROMPT methods (see Appendix H for the full table). Although SEED performs better than PROMPT, PROMPT methods show a strong upward trend as the size of the pre-trained language model increases (e.g., changing from BERT-base to BERT-large). Also, X-Class and NPPrompt fail on RoBERTa and BERT respectively; we hypothesize that assumptions made in these methods do not generalize to all pre-trained language models. For example, the distribution of similarities between representations generated by a language model may differ across models. This scaling trend should be taken into account when selecting a method for XWS-TC if the language model size differs from those evaluated in this benchmark.
171
+
172
+ <table><tr><td rowspan="2">Method</td><td rowspan="2">Model</td><td colspan="7">Yelp-2</td><td colspan="3">Averaged over Datasets</td></tr><tr><td>default</td><td>alt. 1</td><td>alt. 2</td><td>alt. 3</td><td>alt. 4</td><td>Median</td><td>Average (std)</td><td>Median</td><td>Average</td><td>std</td></tr><tr><td rowspan="2">Prompt</td><td>GPT2-small</td><td>47.36</td><td>32.89</td><td>37.31</td><td>73.11</td><td>39.01</td><td>39.01</td><td>45.94 (14.37)</td><td>31.06</td><td>32.32</td><td>8.40</td></tr><tr><td>GPT2-medium</td><td>33.57</td><td>33.18</td><td>56.77</td><td>78.41</td><td>42.34</td><td>42.34</td><td>48.85 (17.08)</td><td>38.34</td><td>39.11</td><td>11.73</td></tr><tr><td rowspan="2">Prompt + DCPMI</td><td>GPT2-small</td><td>65.34</td><td>76.96</td><td>50.14</td><td>48.83</td><td>39.53</td><td>50.14</td><td>56.16 (13.29)</td><td>60.00</td><td>61.48</td><td>6.45</td></tr><tr><td>GPT2-medium</td><td>87.00</td><td>88.03</td><td>48.56</td><td>79.67</td><td>67.76</td><td>79.67</td><td>74.20 (14.72)</td><td>65.26</td><td>61.54</td><td>14.18</td></tr><tr><td rowspan="2">Prompt + ProtoCal</td><td>GPT2-small</td><td>65.89</td><td>83.87</td><td>60.54</td><td>71.23</td><td>72.25</td><td>72.25</td><td>70.76 (7.78)</td><td>65.54</td><td>64.80</td><td>6.23</td></tr><tr><td>GPT2-medium</td><td>88.60</td><td>87.40</td><td>57.85</td><td>80.13</td><td>82.73</td><td>82.73</td><td>79.34 (11.18)</td><td>62.59</td><td>62.07</td><td>10.85</td></tr></table>
173
+
174
+ Table 4: Performance of PROMPT methods when the instructions are changed to similar meaning alternatives. We show the performance on 5 choices of instructions on Yelp-2 (4 alternatives + 1 default), its median, average, and standard deviation, and the averaged metrics across all datasets.
175
+
176
+ ![](images/debe4db1199d4f46b5f634d0c9f2fa6d63e1724f2b784ccc7d848ab5ebb79b04.jpg)
177
+ Figure 2: We highlight similarities (green) between a SEED method X-Class (orange) and two PROMPT methods Verbalizers and ProtoCal (blue).
178
+
179
+ # 6 Connections between Recent SEED and PROMPT Methods
180
+
181
+ While PROMPT was introduced relatively recently by the seminal GPT-3 paper (Brown et al., 2020), SEED has a longer history and can be traced back to early tf-idf retrieval methods (Salton and Buckley, 1988). In recent years, SEED and PROMPT methods have been exploring similar ideas. SEED methods have been leveraging pre-trained language models to better understand the semantics of seed words, for example, by asking the language model to fill in masks (Meng et al., 2020b) or through representation similarities (Wang et al., 2021; Zhao et al., 2022). PROMPT methods have been exploring calibration and verbalizers to improve and stabilize their predictions. Verbalizers include a step of finding more label words that better represent each class, an approach similar to that used in SEED. We show that a recent representative SEED method, X-Class, and two PROMPT methods, Verbalizers and ProtoCal, have deep connections in their design. This is particularly interesting as the two directions have been developing independently. In Figure 2, we provide a pipeline of the methods and highlight the similarities.
182
+
183
+ # 6.1 Obtaining Text Representations
184
+
185
+ X-Class matches text to classes by learning class-oriented text representations from an encoder-based language model. X-Class views class representations as the union of representations describing the words. The text representation in X-Class is defined as a weighted average of individual token representations where the weights are based on their respective similarity to the class representations. On the other hand, general prompting relies on a decoder-based language model to produce a next token representation. In the penultimate layer of the decoder, the last token representation is computed by an attention mechanism over all other tokens, which essentially produces a weighted average of all the token representations.
186
+
187
+ In both methods, the text representation is obtained using an attention-like weighted average of tokens in the text. The attention is guided such that the output representation is indicative of the class. X-Class uses signals from class names to guide the attention while prompting relies on the understanding of the instruction.
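The weighted-average view shared by both methods can be made concrete with a small sketch. The weighting below (cosine similarity to the closest class representation, followed by a softmax with a temperature) is our illustrative stand-in, not X-Class's exact scheme:

```python
import numpy as np

def class_oriented_text_repr(token_reprs, class_reprs, temperature=0.1):
    """Attention-like text representation: each token is weighted by how
    similar it is to the closest class representation, then tokens are averaged."""
    t = np.asarray(token_reprs, dtype=float)
    c = np.asarray(class_reprs, dtype=float)
    tn = t / np.linalg.norm(t, axis=1, keepdims=True)
    cn = c / np.linalg.norm(c, axis=1, keepdims=True)
    scores = (tn @ cn.T).max(axis=1)        # best cosine similarity to any class
    w = np.exp(scores / temperature)
    w /= w.sum()                            # softmax over tokens
    return (w[:, None] * t).sum(axis=0)     # weighted average of token vectors
```

Prompting arrives at an analogous weighted average implicitly, through the decoder's attention over the instruction and the text.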
188
+
189
+ # 6.2 Obtaining Predicted Likelihoods
190
+
191
+ PROMPT methods obtain likelihoods of the classes by comparing the similarity of the next-token representation to the representations of the label words. A recent line of research on improving prompting for classification enlarges the set of label words to capture more diverse meanings of the classes; these enlarged sets are known as verbalizers, as in PET (Schick and Schütze, 2021), ProtoVerb (Ma et al., 2023), and KPT (Hu et al., 2022). The notion of verbalizers is very similar to seed-word expansion in SEED methods. For example, X-Class and verbalizers both obtain a list of related words and use it to aggregate a class representation, replacing the naive use of a single label/seed word representation. Notably, the verbalizer methods require external supervision to find the related words, such as few-shot data (Schick and Schütze, 2021; Ma et al., 2023) or a knowledge base (Hu et al., 2022), while SEED methods detect related words from an unlabelled corpus. Both approaches could be useful under different input settings.
192
+
193
+ <table><tr><td>Method</td><td>Model</td><td>Average</td><td>Rank Score</td></tr><tr><td colspan="4">PROMPT</td></tr><tr><td rowspan="6">Prompt</td><td>GPT2-small</td><td>30.54</td><td>1</td></tr><tr><td>GPT2-medium</td><td>45.38</td><td>8</td></tr><tr><td>BERT-base</td><td>43.04</td><td>7</td></tr><tr><td>BERT-large</td><td>51.84</td><td>15</td></tr><tr><td>RoBERTa-base</td><td>45.71</td><td>6</td></tr><tr><td>RoBERTa-large</td><td>59.85</td><td>22</td></tr><tr><td rowspan="6">Prompt + DCPMI</td><td>GPT2-small</td><td>65.76</td><td>24</td></tr><tr><td>GPT2-medium</td><td>74.56</td><td>31</td></tr><tr><td>BERT-base</td><td>60.52</td><td>23</td></tr><tr><td>BERT-large</td><td>55.88</td><td>14</td></tr><tr><td>RoBERTa-base</td><td>47.14</td><td>5</td></tr><tr><td>RoBERTa-large</td><td>55.86</td><td>18</td></tr><tr><td rowspan="6">Prompt + ProtoCal</td><td>GPT2-small</td><td>61.05</td><td>21</td></tr><tr><td>GPT2-medium</td><td>70.07</td><td>30</td></tr><tr><td>BERT-base</td><td>55.74</td><td>11</td></tr><tr><td>BERT-large</td><td>70.16</td><td>25</td></tr><tr><td>RoBERTa-base</td><td>61.07</td><td>20</td></tr><tr><td>RoBERTa-large</td><td>66.09</td><td>28</td></tr><tr><td colspan="4">SEED</td></tr><tr><td rowspan="4">X-Class</td><td>BERT-base</td><td>87.17</td><td>37</td></tr><tr><td>BERT-large</td><td>87.94</td><td>39</td></tr><tr><td>RoBERTa-base</td><td>60.18</td><td>19</td></tr><tr><td>RoBERTa-large</td><td>46.78</td><td>13</td></tr><tr><td rowspan="4">ClassKG</td><td>BERT-base</td><td>89.80</td><td>40</td></tr><tr><td>BERT-large</td><td>83.52</td><td>38</td></tr><tr><td>RoBERTa-base</td><td>86.94</td><td>36</td></tr><tr><td>RoBERTa-large</td><td>93.17</td><td>41</td></tr><tr><td rowspan="4">NPPrompt</td><td>BERT-base</td><td>32.46</td><td>0</td></tr><tr><td>BERT-large</td><td>31.45</td><td>2</td></tr><tr><td>RoBERTa-base</td><td>74.93</td><td>32</td></tr><tr><td>RoBERTa-large</td><td>75.56</td><td>33</td></tr></table>
194
+
195
+ Table 5: Performance of PROMPT and SEED methods when the choice of the pre-trained model is alternated.
196
+
197
+ <table><tr><td>Method</td><td>Model</td><td>Average</td><td>Rank Score</td></tr><tr><td>Prompt</td><td>GPT2-small</td><td>34.90</td><td>0</td></tr><tr><td>Prompt + clustering</td><td>GPT2-small</td><td>53.14</td><td>1</td></tr><tr><td>Prompt + DCPMI</td><td>GPT2-small</td><td>58.55</td><td>2</td></tr><tr><td>Prompt + DCPMI + clustering</td><td>GPT2-small</td><td>59.70</td><td>3</td></tr><tr><td>X-Class (w/o clustering)</td><td>BERT-base</td><td>67.40</td><td>6</td></tr><tr><td>X-Class (w/ clustering)</td><td>BERT-base</td><td>73.71</td><td>8</td></tr><tr><td>NPPrompt</td><td>RoBERTa-base</td><td>62.75</td><td>4</td></tr><tr><td>NPPrompt + clustering</td><td>RoBERTa-base</td><td>64.54</td><td>5</td></tr><tr><td>ClassKG</td><td>BERT-base</td><td>74.25</td><td>7</td></tr><tr><td>ClassKG + clustering</td><td>BERT-base</td><td>75.16</td><td>9</td></tr></table>
198
+
199
+ Table 6: Performance of PROMPT and SEED methods with and without the clustering post-processing.
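A verbalizer-style scorer of the kind discussed in Section 6.2 can be sketched as follows; the aggregation (mean of the related words' next-token probabilities) is one simple choice among several used in the literature, and all names are illustrative:

```python
import numpy as np

def verbalizer_likelihoods(next_token_logits, verbalizer, vocab_index):
    """Score each class by aggregating the next-token probabilities of all of
    its related label words, instead of a single label word per class."""
    z = np.asarray(next_token_logits, dtype=float)
    p = np.exp(z - z.max())
    p /= p.sum()                            # softmax over the vocabulary
    return {cls: float(np.mean([p[vocab_index[w]] for w in words]))
            for cls, words in verbalizer.items()}
```

Whether the related-word lists come from few-shot data, a knowledge base, or an unlabelled corpus only changes how `verbalizer` is built, not how it is used.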
204
+
205
+ # 6.3 Unlabeled Corpus Clustering
206
+
207
+ Finally, a SEED method, X-Class, and a PROMPT method, ProtoCal, independently introduced a post-processing step that clusters an unlabelled corpus with the goal of obtaining a better decision boundary. X-Class clusters the text representations and initializes the clusters with the prior text-class similarity so that clusters and classes are aligned. ProtoCal clusters the predicted likelihoods and aligns the clusters to classes by post-matching the cluster centers to the classes. We further explore the effect of these two clustering ideas; a summary is in Table 6 (full table in Appendix I). We show that adding such a post-clustering step almost freely (requiring only an unlabelled corpus) and consistently improves performance across five different methods.
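The shared post-processing idea can be sketched with a few Lloyd iterations over the predicted likelihood vectors, with cluster centers initialized from the current class assignments so that clusters stay aligned with classes (a simplification of both X-Class's and ProtoCal's actual procedures; the function name is ours):

```python
import numpy as np

def cluster_calibrate(likelihoods, n_iter=10):
    """Re-draw the decision boundary by k-means clustering the predicted
    likelihood vectors; centers start at the per-class means of the initial
    argmax predictions, which keeps cluster c aligned with class c."""
    X = np.asarray(likelihoods, dtype=float)
    k = X.shape[1]
    assign = X.argmax(axis=1)               # initial (uncalibrated) predictions
    for _ in range(n_iter):
        centers = np.stack([X[assign == c].mean(axis=0) if (assign == c).any()
                            else X.mean(axis=0) for c in range(k)])
        assign = ((X[:, None, :] - centers[None]) ** 2).sum(axis=-1).argmin(axis=1)
    return assign
```

Texts near the original decision boundary may flip to the cluster their likelihood vector is closest to, which is the intended recalibration effect.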
208
+
209
+ # 6.4 Implications
210
+
211
+ Given these connections between SEED and PROMPT methods and the previous analysis on robustness, a natural extension is to analyze the cause of the stability issues with respect to label/seed words and model differences. We presented one empirical analysis of the clustering step in X-Class and ProtoCal and showed that this step can improve performance for the various methods discussed in the benchmark (Section 6.3). Further analysis of other components is left as future work. For example, one could reason that the introduction of related words makes a model less sensitive to the given label/seed words. This would require an exploration of the quality of the related words found by different SEED and verbalizer methods, and
212
+
213
+ whether the related words between methods can be used interchangeably.
214
+
215
+ # 7 Conclusions and Future Work
216
+
217
+ In this work, we introduce a benchmark to quantitatively evaluate different SEED and PROMPT approaches for extremely weakly supervised text classification. Through the benchmark, we raise awareness of SEED approaches, which are strong competitors to the better-known zero-shot prompting (with calibrations). We also examine the robustness of the two approaches and show that SEED methods are more tolerant to changes in the given human guidance, but are also more selective about the pre-trained language model. We further analyze the connections between SEED and PROMPT approaches through the lens of a few representative methods and show that the two methodologies have been converging recently. Finally, we include a study of clustering as a calibration technique, independently proposed for both approaches, and show that it can be a good performance booster.
218
+
219
+ We envision future work in two directions. The first one would be to understand the source of robustness difference and design a method that can take the best of both worlds (see Section 6.4). The other would be to scale up the experiments and test if the conclusions still hold for larger pre-trained language models.
220
+
221
+ # Limitations
222
+
223
+ Limitation of Model Scale The benchmark only includes the evaluation of moderate-size language models and does not experiment on large language models. We justify our reasons in Section 4.6 and Appendix E, and include an evaluation of ChatGPT in Appendix E, showing that even human-feedback fine-tuned large language models are far from perfect on XWS-TC. However, we acknowledge that the current state of extremely weak supervision would be better understood and assessed if complete evaluations on state-of-the-art large language models, such as InstructGPT (Ouyang et al., 2022), PaLM (Chowdhery et al., 2022), and ChatGPT, were available. While we lack the computational resources to perform such an evaluation, we hope this work can stimulate interest in XWS-TC and that future work completes the study.
224
+
225
+ Limitation of Text Classification Another limitation is the scope of Text Classification. While
226
+
227
+ PROMPT and SEED methods have shown strong performance on text classification, this does not extend to other general classification tasks, such as natural language inference/entailment (Zhao et al., 2022).
228
+
229
+ # Ethics Statement
230
+
231
+ This paper establishes a benchmark for extremely weakly supervised text classification frameworks. We provide empirical results on various SEED and PROMPT methods, test their robustness, and analyze their connections. We give intuitions and insights on what method one should use for XWS-TC in different circumstances. We believe that we are on the ethical side and do not find any ethical concerns in this work.
232
+
233
+ # References
234
+
235
+ Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7747-7763. Association for Computational Linguistics.
236
+ David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.
237
+ Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
238
+ Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim,
239
+
240
+ Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. CoRR, abs/2204.02311.
241
+ Pierre Colombo, Nathan Noiry, Ekhine Irurozki, and Stéphan Clémençon. 2022. What are the best systems? new perspectives on NLP benchmarking. CoRR, abs/2202.03799.
242
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
243
+ Tianyu Gao, Adam Fisch, and Danqi Chen. 2021. Making pre-trained language models better few-shot learners. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 3816-3830. Association for Computational Linguistics.
244
+ Ariel Gera, Alon Halfon, Eyal Shnarch, Yotam Perlitz, Liat Ein-Dor, and Noam Slonim. 2022. Zero-shot text classification with self-training. CoRR, abs/2210.17541.
245
+ Zhixiong Han, Yaru Hao, Li Dong, and Furu Wei. 2022. Prototypical calibration for few-shot learning of language models. CoRR, abs/2205.10183.
246
+ Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, and Luke Zettlemoyer. 2021. Surface form competition: Why the highest probability answer isn't always right. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November; 2021, pages 7038-7051. Association for Computational Linguistics.
247
+ Shengding Hu, Ning Ding, Huadong Wang, Zhiyuan Liu, Jingang Wang, Juanzi Li, Wei Wu, and Maosong Sun. 2022. Knowledgeable prompt-tuning: Incorporating knowledge into prompt verbalizer for text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 2225-2240. Association for Computational Linguistics.
248
+
249
+ Ken Lang. 1995. Newsweeder: Learning to filter netnews. In ICML, pages 331-339. Morgan Kaufmann.
250
+ Chen Liang, Yue Yu, Haoming Jiang, Siawpeng Er, Ruijia Wang, Tuo Zhao, and Chao Zhang. 2020. BOND: bert-assisted open-domain named entity recognition with distant supervision. In KDD '20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020, pages 1054-1064. ACM.
251
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
252
+ Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 8086-8098. Association for Computational Linguistics.
253
+ Ting Ma, Mingming Li, Shangwen Lv, Fuqing Zhu, Longtao Huang, and Songlin Hu. 2023. Conte: contextualized knowledge graph embedding for circular relations. Data Min. Knowl. Discov., 37(1):110-135.
254
+ Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
255
+ Dheeraj Mekala and Jingbo Shang. 2020. Contextualized weak supervision for text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 323-333, Online. Association for Computational Linguistics.
256
+ Yu Meng, Jiaxin Huang, Guangyuan Wang, Zihan Wang, Chao Zhang, Yu Zhang, and Jiawei Han. 2020a. Discriminative topic mining via category-name guided text embedding. In WWW, pages 2121-2132. ACM / IW3C2.
257
+ Yu Meng, Jiaming Shen, Chao Zhang, and Jiawei Han. 2018. Weakly-supervised neural text classification. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 22-26, 2018, pages 983-992. ACM.
258
+ Yu Meng, Yunyi Zhang, Jiaxin Huang, Chenyan Xiong, Heng Ji, Chao Zhang, and Jiawei Han. 2020b. Text classification using label names only: A language model self-training approach. In Proceedings of the 2020 Conference on Empirical Methods in Natural
259
+
260
+ Language Processing, EMNLP 2020, Online, November 16-20, 2020, pages 9006-9017. Association for Computational Linguistics.
261
+ Sewon Min, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Noisy channel language model prompting for few-shot text classification. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5316-5330. Association for Computational Linguistics.
262
+ Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. CoRR, abs/2203.02155.
263
+ Seongmin Park and Jihwa Lee. 2022. LIME: weakly-supervised text classification without seeds. In Proceedings of the 29th International Conference on Computational Linguistics, COLING 2022, Gyeongju, Republic of Korea, October 12-17, 2022, pages 1083-1088. International Committee on Computational Linguistics.
264
+ Ethan Perez, Douwe Kiela, and Kyunghyun Cho. 2021. True few-shot learning with language models. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 11054-11070.
265
+ Nicholas Carl Roberts, Xintong Li, Tzu-Heng Huang, Dyah Adila, Spencer Schoenberg, Cheng-Yu Liu, Lauren Pick, Haotian Ma, Aws Albarghouthi, and Frederic Sala. 2022. Autows-bench-101: Benchmarking automated weak supervision with 100 labels. CoRR, abs/2208.14362.
266
+ Gerard Salton and Chris Buckley. 1988. Term-weighting approaches in automatic text retrieval. Inf. Process. Manag., 24(5):513-523.
267
+ Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask prompted training enables zero-shot task generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
268
+
269
+ Timo Schick and Hinrich Schütze. 2021. Exploiting cloze-questions for few-shot text classification and natural language inference. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, EACL 2021, Online, April 19 - 23, 2021, pages 255-269. Association for Computational Linguistics.
270
+ Jingbo Shang, Jialu Liu, Meng Jiang, Xiang Ren, Clare R. Voss, and Jiawei Han. 2018. Automated phrase mining from massive text corpora. IEEE Trans. Knowl. Data Eng., 30(10):1825-1837.
271
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
272
+ Zihan Wang, Dheeraj Mekala, and Jingbo Shang. 2021. X-class: Text classification with extremely weak supervision. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, pages 3043-3053. Association for Computational Linguistics.
273
+ Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. Benchmarking zero-shot text classification: Datasets, evaluation and entailment approach. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3912-3921. Association for Computational Linguistics.
274
+ Jieyu Zhang, Yue Yu, Yinghao Li, Yujing Wang, Yaming Yang, Mao Yang, and Alexander Ratner. 2021a. WRENCH: A comprehensive benchmark for weak supervision. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks 2021, December 2021, virtual.
275
+ Lu Zhang, Jiandong Ding, Yi Xu, Yingyao Liu, and Shuigeng Zhou. 2021b. Weakly-supervised text classification based on keyword graph. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, pages 2803-2813. Association for Computational Linguistics.
276
+ Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In NIPS, pages 649-657.
277
+ Xuandong Zhao, Siqi Ouyang, Zhiguo Yu, Ming Wu, and Lei Li. 2022. Pre-trained language models can be fully zero-shot learners. arXiv preprint arXiv:2212.06950.
278
+
279
+ Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021. Calibrate before use: Improving few-shot performance of language models. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 12697-12706. PMLR.
280
+
281
+ # A Other SEED and PROMPT Methods
282
+
283
+ More SEED methods. There are other SEED methods that we briefly describe here. WeSTClass (Meng et al., 2018) is one of the earlier weakly supervised methods; it utilizes seed words to train a classifier by generating pseudo documents instead of generating pseudo labels. ConWea (Mekala and Shang, 2020) explores word polysemy and proposes to treat seed words with different meanings as different words. LIME (Park and Lee, 2022) uses a model fine-tuned on a natural language inference dataset to suggest the seed words.
284
+
285
+ More PROMPT methods. There are also other post-/pre-processing techniques that we briefly describe here. ContextualCal (Zhao et al., 2021) and PromptOrder (Lu et al., 2022) work for in-context learning (in the few-shot scenario) and address the instability of the few-shot context in prompts. NoisyChannel (Min et al., 2022) considers the likelihood of generating the document given the label, rather than generating the label given the document.
286
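The direct vs. noisy-channel scoring directions can be illustrated with a minimal Python sketch. Here `seq_logprob` is a hypothetical stand-in for a real language model's sequence log-likelihood (a smoothed unigram score over a toy word list), and the prompt templates are illustrative only.

```python
import math

# Toy stand-in "corpus" for the hypothetical LM scorer.
CORPUS = "the movie was great wonderful acting positive review".split()

def seq_logprob(text: str) -> float:
    # Stand-in LM score: sum of log smoothed unigram frequencies.
    counts = {w: CORPUS.count(w) for w in set(CORPUS)}
    total = len(CORPUS)
    return sum(math.log((counts.get(w, 0) + 1) / (total + 1))
               for w in text.lower().split())

LABELS = ["positive", "negative"]

def direct_predict(doc: str) -> str:
    # Direct scoring: likelihood of the label as a continuation of the document.
    return max(LABELS, key=lambda y: seq_logprob(f"review: {doc} sentiment: {y}"))

def channel_predict(doc: str) -> str:
    # Noisy-channel scoring: likelihood of the document conditioned on the label
    # (the label-bearing prefix comes first, then the document).
    return max(LABELS, key=lambda y: seq_logprob(f"sentiment: {y} review: {doc}"))

print(direct_predict("the movie was great"))
print(channel_predict("the movie was great"))
```

In a real system the two directions can disagree: channel scoring is less sensitive to label-word frequency because every candidate sequence contains the same document tokens.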
+
287
+ # B Dataset Sources
288
+
289
+ The datasets are first introduced in the following papers:
290
+
291
+ - IMDB (Maas et al., 2011)
292
+ - Yelp-2, Yelp-5, AGNews, DBpedia (Zhang et al., 2015)
293
+ - 20News, 20News-Fine (Lang, 1995)
294
+ - NYT-S, NYT-S-Fine, NYT, NYT-Loc (Meng et al., 2020a)
295
+
296
+ # C Detailed Instructions and Label/Seed Words
297
+
298
+ We provide Table 7 showing the instructions and label words used in the main experiment of the benchmark.
299
+
300
+ # D Comparing Pre-trained Language Models
301
+
302
+ We are aware that a similar number of parameters in language models does not directly imply similar abilities. We notice that GPT-family LMs tend to have lower fine-tuning performance on natural language understanding tasks (Wang et al., 2019) compared with BERT/RoBERTa. However,
303
+
304
+ we also notice that similar-sized GPT models perform similarly to RoBERTa on zero-shot prompting, as observed in Table 8. Since we are comparing under an XWS setting, instead of fully supervised fine-tuning, we believe it is fair to compare similar-sized GPT and RoBERTa models. We do acknowledge that BERT might be at a disadvantage, since RoBERTa is better than BERT at both fully supervised fine-tuning (Liu et al., 2019) and zero-shot prompting (Table 8). However, as we note in Section 5.2.3, certain SEED methods that work well on BERT might not be easily transferable to RoBERTa.
305
+
306
+ # E Excluding Large Language Models
307
+
308
+ We did not include large language models in this benchmark. Here, we elaborate on two specific reasons.
309
+
310
+ From the design purpose of the benchmark: the focus of the benchmark is to understand the strengths of different SEED and PROMPT methods, which would help small businesses or individuals decide which method to use for XWS-TC. Therefore, analyses and comparisons on moderately sized language models (100M-300M parameters in the benchmark) are more meaningful.
311
+
312
+ From a fair evaluation principle: the large language models mentioned above are all generative language models, which are not typically used by SEED approaches. Using a more powerful language model for one approach would defeat the purpose of a fair comparison between methods. Further, fine-tuned large language models have already seen many classification tasks that are the same as, or very similar to, the datasets in this benchmark. Therefore, it would be hard to assess the true performance of the methods, as the similarity of the fine-tuning tasks to the evaluation tasks becomes another factor.
313
+
314
+ We also include an evaluation of ChatGPT on the benchmark. It is hard to evaluate such a model fairly, since (1) we do not know how it was trained or whether it saw the datasets in the benchmark, and (2) there is no easy way to do a large-scale evaluation. We decided to evaluate it on NYT-S-Fine, since we believe it is unlikely to have been trained on such a fine-grained dataset. We pick 4 examples from each class, resulting in a total of 104 examples. Since we cannot retrieve the likelihoods, we embed the choice of classes in the prompt as follows: <instruction> <text> Answer:, where
315
+
316
+ <table><tr><td>Dataset</td><td>Instruction</td><td>Label Words/Seed Words</td></tr><tr><td>IMDB</td><td>review: &lt;text&gt; sentiment: &lt;label&gt;</td><td>positive; negative</td></tr><tr><td>Yelp-2</td><td>review: &lt;text&gt; sentiment: &lt;label&gt;</td><td>positive; negative</td></tr><tr><td>Yelp-5</td><td>review: &lt;text&gt; sentiment: &lt;label&gt;</td><td>excellent; good; average; bad; awful</td></tr><tr><td>AGNews</td><td>text: &lt;text&gt; topic: &lt;label&gt;</td><td>politics; sports; business; technology</td></tr><tr><td>20News</td><td>text: &lt;text&gt; topic: &lt;label&gt;</td><td>computer; sports; science; politics; religion</td></tr><tr><td>20News-Fine</td><td>text: &lt;text&gt; topic: &lt;label&gt;</td><td>atheism; graphics; Microsoft; IBM; Mac; motif; autos; motorcycles; baseball; hockey; encryption; electronics; medicine; space; Christian; guns; Arab</td></tr><tr><td>NYT-S</td><td>text: &lt;text&gt; topic: &lt;label&gt;</td><td>politics; art; business; science; sport</td></tr><tr><td>NYT-S-Fine</td><td>text: &lt;text&gt; topic: &lt;label&gt;</td><td>budget; gun; laws; gay; energy; environment; immigration; military; cosmos; insurance; stocks; bank; abortion; music; baseball; economy; television; golf; tennis; hockey; football; dance; movies; soccer; surveillance; basketball</td></tr><tr><td>NYT</td><td>text: &lt;text&gt; topic: &lt;label&gt;</td><td>business; politics; sports; health; education; estate; arts; science; technology</td></tr><tr><td>NYT-Loc</td><td>text: &lt;text&gt; location: &lt;label&gt;</td><td>America; Iraq; Japan; China; Britain; Russia; Germany; Canada; France; Italy</td></tr><tr><td>DBpedia</td><td>text: &lt;text&gt; topic: &lt;label&gt;</td><td>company; education; artist; athlete; politician; transportation; place; nature; village; species; plant; album; movie; book;</td></tr></table>
317
+
318
+ Table 7: Instructions, Label words, and Seed Words.
319
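As a minimal sketch (not the benchmark's actual code), each template in Table 7 expands into one candidate sequence per label word, and the prompting method scores all of them. The function name and example review are illustrative assumptions.

```python
def expand_prompts(template: str, text: str, label_words: list[str]) -> dict[str, str]:
    """Fill <text> once, then fill <label> with each candidate label word."""
    filled = template.replace("<text>", text)
    return {y: filled.replace("<label>", y) for y in label_words}

# IMDB / Yelp-2 template from Table 7, with an illustrative review.
prompts = expand_prompts(
    "review: <text> sentiment: <label>",
    "A gripping film from start to finish.",
    ["positive", "negative"],
)
for label, prompt in prompts.items():
    print(label, "->", prompt)
```

A PROMPT method then picks the label whose filled sequence the language model scores highest.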
+
320
+ <table><tr><td>Method</td><td>Model</td><td>IMDB</td><td>Yelp-2</td><td>Yelp-5</td><td>AGNews</td><td>20News</td><td>20News-Fine</td><td>NYT-S</td><td>NYT-S-Fine</td><td>NYT</td><td>NYT-Loc</td><td>DBpedia</td><td>Average</td><td>Rank Score</td></tr><tr><td rowspan="6">Prompt</td><td>GPT2-small</td><td>56.42</td><td>47.36</td><td>7.62</td><td>38.42</td><td>36.32</td><td>28.76</td><td>22.45</td><td>38.90</td><td>33.44</td><td>60.32</td><td>13.93</td><td>34.90</td><td>1</td></tr><tr><td>BERT-base</td><td>42.16</td><td>35.48</td><td>7.59</td><td>68.89</td><td>50.35</td><td>3.78</td><td>49.94</td><td>39.96</td><td>37.88</td><td>38.49</td><td>17.71</td><td>35.67</td><td>0</td></tr><tr><td>RoBERTa-base</td><td>40.51</td><td>54.01</td><td>15.27</td><td>66.94</td><td>46.87</td><td>12.45</td><td>33.27</td><td>19.80</td><td>38.88</td><td>43.92</td><td>28.60</td><td>36.41</td><td>2</td></tr><tr><td>GPT2-medium</td><td>35.80</td><td>33.57</td><td>25.87</td><td>69.36</td><td>55.16</td><td>46.03</td><td>54.08</td><td>46.14</td><td>24.92</td><td>79.00</td><td>24.52</td><td>44.95</td><td>4</td></tr><tr><td>BERT-large</td><td>46.64</td><td>40.91</td><td>13.71</td><td>71.45</td><td>50.20</td><td>8.67</td><td>38.84</td><td>21.12</td><td>37.58</td><td>37.56</td><td>56.17</td><td>38.39</td><td>3</td></tr><tr><td>RoBERTa-large</td><td>86.87</td><td>90.54</td><td>25.75</td><td>76.72</td><td>44.89</td><td>5.21</td><td>33.09</td><td>16.29</td><td>44.89</td><td>59.95</td><td>39.03</td><td>47.57</td><td>5</td></tr></table>
321
+
322
+ Table 8: Performance of PROMPT methods with different pre-trained language models.
323
+
324
+ <instruction> is "Choose exactly one of the following classes that best describes the text. Just give the class name as answer, no explanations, nothing more." followed by the list of all class names.
325
+
326
+ ChatGPT suggests a single-word answer within the set of 26 class names in 91 out of 104 questions; we were able to correct 3 of the 13 out-of-scope answers since they do contain the correct class name. After the correction, ChatGPT is correct on 71 out of 104 questions, a prediction accuracy of $68.27\%$ . The accuracy of X-Class on the same 104 questions is $57.69\%$ . This indicates that while ChatGPT performs quite well, there is still much room to improve, given that it uses a much larger language model than X-Class does.
327
+
328
+ # F Method Implementations
329
+
330
+ We use the publicly available implementations of the following methods:
331
+
332
+ X-Class: https://github.com/ZihanWangKi/XClass
333
+
334
+ LoTClass: https://github.com/yumeng5/LOTClass
335
+
336
+ ClassKG: https://github.com/zhanglu-cst/ClassKG
337
+
338
+ NPPrompt: https://anonymous.4open.science/r/NPPrompt
339
+
340
+ DCPMI: https://github.com/peterwestuw/surface-form-competition
341
+
342
+ ProtoCal: our own implementation.
343
+
344
+ # G Computation Costs
345
+
346
+ We ran experiments on A6000 and A5000 GPUs. The total estimated GPU time is 600 hours.
347
+
348
+ # H Full version of Table 5
349
+
350
+ We show Table 9, the detailed version of Table 5 that includes performances on individual datasets.
351
+
352
+ # I Full version of Table 6
353
+
354
+ We show Table 10, the detailed version of Table 6 that includes performances on individual datasets.
355
+
356
+ <table><tr><td>Method</td><td>Model</td><td>Yelp-2</td><td>AGNews</td><td>NYT-S</td><td>DBpedia</td><td>Average</td><td>Rank Score</td></tr><tr><td colspan="8">PROMPT</td></tr><tr><td rowspan="8">Prompt</td><td>GPT2-small</td><td>47.36</td><td>38.42</td><td>22.45</td><td>13.93</td><td>30.54</td><td>1</td></tr><tr><td>GPT2-medium</td><td>33.57</td><td>69.36</td><td>54.08</td><td>24.52</td><td>45.38</td><td>8</td></tr><tr><td>BERT-base</td><td>35.58</td><td>68.89</td><td>49.94</td><td>17.71</td><td>43.04</td><td>7</td></tr><tr><td>BERT-large</td><td>40.91</td><td>71.45</td><td>38.84</td><td>56.17</td><td>51.84</td><td>15</td></tr><tr><td>RoBERTa-base</td><td>54.01</td><td>66.94</td><td>33.27</td><td>28.60</td><td>45.71</td><td>6</td></tr><tr><td>RoBERTa-large</td><td>90.54</td><td>76.72</td><td>33.09</td><td>39.03</td><td>59.85</td><td>22</td></tr><tr><td>BART-base</td><td>68.93</td><td>52.02</td><td>36.11</td><td>16.61</td><td>43.42</td><td>4</td></tr><tr><td>BART-large</td><td>89.02</td><td>70.89</td><td>34.35</td><td>27.82</td><td>55.52</td><td>16</td></tr><tr><td rowspan="8">Prompt + DCPMI</td><td>GPT2-small</td><td>65.34</td><td>72.67</td><td>73.93</td><td>51.10</td><td>65.76</td><td>24</td></tr><tr><td>GPT2-medium</td><td>87.00</td><td>74.13</td><td>79.80</td><td>57.30</td><td>74.56</td><td>31</td></tr><tr><td>BERT-base</td><td>78.46</td><td>75.53</td><td>51.44</td><td>36.63</td><td>60.52</td><td>23</td></tr><tr><td>BERT-large</td><td>78.02</td><td>64.38</td><td>21.09</td><td>60.02</td><td>55.88</td><td>14</td></tr><tr><td>RoBERTa-base</td><td>67.73</td><td>59.61</td><td>30.96</td><td>30.24</td><td>47.14</td><td>5</td></tr><tr><td>RoBERTa-large</td><td>69.42</td><td>74.91</td><td>39.94</td><td>39.16</td><td>55.86</td><td>18</td></tr><tr><td>BART-base</td><td>34.83</td><td>45.53</td><td>49.68</td><td>14.66</td><td>36.18</td><td>3</td></tr><tr><td>BART-large</td><td>55.16</td><td>75.13</td><td>36.24</td><td>41.16</td><td>51.92</td><td>17</td></tr><tr><td 
rowspan="8">Prompt + ProtoCal</td><td>GPT2-small</td><td>65.89</td><td>72.66</td><td>53.69</td><td>51.97</td><td>61.05</td><td>21</td></tr><tr><td>GPT2-medium</td><td>88.60</td><td>75.26</td><td>51.97</td><td>64.46</td><td>70.07</td><td>30</td></tr><tr><td>BERT-base</td><td>75.91</td><td>65.72</td><td>44.65</td><td>36.68</td><td>55.74</td><td>11</td></tr><tr><td>BERT-large</td><td>78.18</td><td>66.45</td><td>57.51</td><td>78.52</td><td>70.16</td><td>25</td></tr><tr><td>RoBERTa-base</td><td>82.76</td><td>71.34</td><td>39.01</td><td>51.16</td><td>61.07</td><td>20</td></tr><tr><td>RoBERTa-large</td><td>92.13</td><td>78.95</td><td>43.29</td><td>49.97</td><td>66.09</td><td>28</td></tr><tr><td>BART-base</td><td>86.78</td><td>52.94</td><td>47.51</td><td>23.51</td><td>52.68</td><td>10</td></tr><tr><td>BART-large</td><td>92.18</td><td>73.89</td><td>50.73</td><td>50.83</td><td>66.91</td><td>27</td></tr><tr><td colspan="8">SEED</td></tr><tr><td rowspan="4">X-Class</td><td>BERT-base</td><td>85.44</td><td>81.81</td><td>91.94</td><td>89.50</td><td>87.17</td><td>37</td></tr><tr><td>BERT-large</td><td>90.39</td><td>85.91</td><td>87.53</td><td>87.91</td><td>87.94</td><td>39</td></tr><tr><td>RoBERTa-base</td><td>55.06</td><td>32.66</td><td>61.17</td><td>91.85</td><td>60.18</td><td>19</td></tr><tr><td>RoBERTa-large</td><td>38.58</td><td>23.91</td><td>50.72</td><td>73.89</td><td>46.78</td><td>13</td></tr><tr><td rowspan="4">ClassKG</td><td>BERT-base</td><td>92.21</td><td>88.10</td><td>84.12</td><td>94.75</td><td>89.80</td><td>40</td></tr><tr><td>BERT-large</td><td>93.10</td><td>87.30</td><td>80.95</td><td>72.74</td><td>83.52</td><td>38</td></tr><tr><td>RoBERTa-base</td><td>79.04</td><td>88.84</td><td>82.98</td><td>96.89</td><td>86.94</td><td>36</td></tr><tr><td>RoBERTa-large</td><td>97.13</td><td>88.20</td><td>91.30</td><td>96.04</td><td>93.17</td><td>41</td></tr><tr><td 
rowspan="4">NPPrompt</td><td>BERT-base</td><td>37.20</td><td>33.89</td><td>32.11</td><td>11.42</td><td>32.46</td><td>0</td></tr><tr><td>BERT-large</td><td>37.20</td><td>33.89</td><td>13.49</td><td>41.20</td><td>31.45</td><td>2</td></tr><tr><td>RoBERTa-base</td><td>81.17</td><td>80.42</td><td>77.76</td><td>60.36</td><td>74.93</td><td>32</td></tr><tr><td>RoBERTa-large</td><td>93.58</td><td>83.62</td><td>77.93</td><td>47.11</td><td>75.56</td><td>33</td></tr></table>
357
+
358
+ Table 9: The full version of Table 5, which includes the performance of PROMPT and SEED methods when the choice of the pre-trained model is alternated. PROMPT methods are evaluated on GPT2, BERT, BART, and RoBERTa, while SEED methods are evaluated on BERT and RoBERTa.
359
+
360
+ <table><tr><td>Method</td><td>Model</td><td>IMDB</td><td>Yelp-2</td><td>Yelp-5</td><td>AGNews</td><td>20News</td><td>20News-Fine</td><td>NYT-S</td><td>NYT-S-Fine</td><td>NYT</td><td>NYT-Loc</td><td>DBpedia</td><td>Average</td><td>Rank Score</td></tr><tr><td>Prompt</td><td>GPT2-small</td><td>56.42</td><td>47.36</td><td>7.62</td><td>38.42</td><td>36.32</td><td>28.76</td><td>22.45</td><td>38.90</td><td>33.44</td><td>60.32</td><td>13.93</td><td>34.90</td><td>0</td></tr><tr><td>Prompt + clustering</td><td>GPT2-small</td><td>70.35</td><td>65.89</td><td>23.77</td><td>72.66</td><td>58.62</td><td>36.77</td><td>53.69</td><td>29.82</td><td>55.15</td><td>65.80</td><td>51.97</td><td>53.14</td><td>1</td></tr><tr><td>Prompt + DCPMI</td><td>GPT2-small</td><td>70.13</td><td>65.34</td><td>23.01</td><td>72.67</td><td>61.64</td><td>37.45</td><td>73.93</td><td>63.19</td><td>55.20</td><td>70.40</td><td>51.10</td><td>58.55</td><td>2</td></tr><tr><td>Prompt + DCPMI + clustering</td><td>GPT2-small</td><td>70.38</td><td>65.84</td><td>27.58</td><td>78.08</td><td>62.40</td><td>41.94</td><td>82.21</td><td>36.88</td><td>58.74</td><td>63.97</td><td>68.64</td><td>59.70</td><td>3</td></tr><tr><td>XClass (w/o clustering)</td><td>BERT-base</td><td>73.79</td><td>83.49</td><td>27.48</td><td>72.05</td><td>74.09</td><td>55.35</td><td>85.76</td><td>55.93</td><td>68.57</td><td>82.37</td><td>62.48</td><td>67.40</td><td>6</td></tr><tr><td>XClass (w clustering)</td><td>BERT-base</td><td>82.89</td><td>85.44</td><td>28.80</td><td>81.81</td><td>76.98</td><td>58.78</td><td>91.94</td><td>61.06</td><td>67.19</td><td>86.38</td><td>89.50</td><td>73.71</td><td>8</td></tr><tr><td>NPPrompt</td><td>RoBERTa-base</td><td>85.19</td><td>81.17</td><td>14.20</td><td>80.42</td><td>68.92</td><td>48.64</td><td>77.76</td><td>55.23</td><td>64.46</td><td>53.85</td><td>60.36</td><td>62.75</td><td>4</td></tr><tr><td>NPPrompt + 
clustering</td><td>RoBERTa-base</td><td>84.84</td><td>82.99</td><td>14.48</td><td>83.12</td><td>70.42</td><td>50.44</td><td>91.84</td><td>44.10</td><td>62.22</td><td>54.17</td><td>71.32</td><td>64.54</td><td>5</td></tr><tr><td>ClassKG</td><td>BERT-base</td><td>88.08</td><td>92.21</td><td>32.33</td><td>88.10</td><td>81.72</td><td>52.29*</td><td>84.12</td><td>49.59*</td><td>60.79</td><td>92.81</td><td>94.75</td><td>74.25</td><td>7</td></tr><tr><td>ClassKG + clustering</td><td>BERT-base</td><td>88.86</td><td>92.65</td><td>40.59</td><td>87.19</td><td>80.95</td><td>54.51*</td><td>85.71</td><td>52.87*</td><td>56.75</td><td>91.44</td><td>95.20</td><td>75.16</td><td>9</td></tr></table>
361
+
362
+ Table 10: The full version of Table 6, which contains the performance of PROMPT and SEED methods with and without the clustering post-processing.
363
+
364
+ A For every submission:
365
+
366
+ A1. Did you describe the limitations of your work?
367
+
368
+ Last page
369
+
370
+ A2. Did you discuss any potential risks of your work?
371
+
372
+ Last page
373
+
374
+ A3. Do the abstract and introduction summarize the paper's main claims?
375
+
376
+ Yes, first page
377
+
378
+ A4. Have you used AI writing assistants when working on this paper?
379
+
380
+ Left blank.
381
+
382
+ B Did you use or create scientific artifacts?
383
+
384
+ Sec 4.1
385
+
386
+ B1. Did you cite the creators of artifacts you used?
387
+
388
+ Appendix B
389
+
390
+ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
391
+
392
+ They are open-sourced.
393
+
394
+ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
395
+
396
+ They are open-sourced for text classification evaluation.
397
+
398
+ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
399
+
400
+ The dataset are not collected by us and is open-sourced.
401
+
402
+ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
403
+
404
+ Sec 4.1
405
+
406
+ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
407
+
408
+ Sec 4.1
409
+
410
+ C Did you run computational experiments?
411
+
412
+ Sec 5, 6
413
+
414
+ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
415
+
416
+ Appendix $F,G$
417
+
418
+ The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
419
+
420
+ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
421
+
422
+ Sec 4.4
423
+
424
+ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
425
+
426
+ Sec 4.2
427
+
428
+ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
429
+
430
+ Not applicable. Left blank.
431
+
432
+ D Did you use human annotators (e.g., crowdworkers) or research with human participants?
433
+
434
+ Left blank.
435
+
436
+ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
437
+
438
+ No response.
439
+
440
+ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
441
+
442
+ No response.
443
+
444
+ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
445
+
446
+ No response.
447
+
448
+ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
449
+
450
+ No response.
451
+
452
+ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
453
+
454
+ No response.
2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:04d1252787fe2ee0fabba56215e88c442dd0d8b491f0e8016e6fef39c2e18cf3
3
+ size 1142890
2023/A Benchmark on Extremely Weakly Supervised Text Classification_ Reconcile Seed Matching and Prompting Approaches/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/ab706f98-d210-4683-ba07-c1e317949d40_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/ab706f98-d210-4683-ba07-c1e317949d40_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/ab706f98-d210-4683-ba07-c1e317949d40_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:305626e76eaabdc3f1b7f0f52d6a92b290a96808ff0f3c3b32a59f9da3a9e3c3
3
+ size 439133
2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/full.md ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:40dbfabd1f4dceeaf54d59874c38ad1c1bcd0206489f48a478e5da0a091abe3f
3
+ size 915751
2023/A Call for Standardization and Validation of Text Style Transfer Evaluation/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/5d2ab272-62cd-4b5d-82f2-5c5c490fd685_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/5d2ab272-62cd-4b5d-82f2-5c5c490fd685_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/5d2ab272-62cd-4b5d-82f2-5c5c490fd685_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:fb8e1449e41145b35b155b71cc10b02a79dac7b0da197b5da5aa06f80d1a345c
3
+ size 11768300
2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/full.md ADDED
@@ -0,0 +1,488 @@
 
 
 
 
1
+ # A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition
2
+
3
+ Qi Li $^{12}$ , Tingyu Xie $^{12}$ , Peng Peng $^{2}$ , Hongwei Wang $^{*12}$ and Gaoang Wang $^{*12}$
4
+
5
+ $^{1}$ College of Computer Science and Technology, Zhejiang University, China $^{2}$ ZJU-UIUC Institute, Zhejiang University, China
6
+
7
+ liqi177@zju.edu.cn, 11921049@zju.edu.cn, pengpeng@intl.zju.edu.cn
8
+
9
+ hongweiwang@zju.edu.cn, gaoangwang@intl.zju.edu.cn
10
+
11
+ # Abstract
12
+
13
+ Distant supervision reduces the reliance on human annotation in named entity recognition tasks. Class-level imbalance in distant annotation is a realistic and unexplored problem, and the popular self-training method cannot handle class-level imbalanced learning. More importantly, self-training is dominated by the high-performance class when selecting candidates, and it degrades the low-performance class through the bias of the generated pseudo labels. To address this class-level imbalance in performance, we propose a class-rebalancing self-training framework for improving distantly-supervised named entity recognition. In candidate selection, a class-wise flexible threshold is designed to fully explore classes other than the high-performance class. In label generation, a hybrid pseudo label that injects the distant label is adopted to provide direct semantic information for the low-performance class. Experiments on five flat and two nested datasets show that our model achieves state-of-the-art results. We also conduct extensive analyses of the effectiveness of the flexible threshold and the hybrid pseudo label.
14
+
15
+ # 1 Introduction
16
+
17
+ The named entity recognition (NER) task recognizes the location and classification of named entities. To reduce the reliance on human annotation in supervised NER, some works turn to distant supervision to generate large-scale labeled data automatically (Li et al., 2021; Zhou et al., 2022; Jie et al., 2019). Distant supervision matches words in sentences against labeled concepts in collected knowledge bases (Liang et al., 2020). The distantly-labeled data obtained from rule-based matching is accompanied by noisy labels. Previous works in distant supervision mainly focus on the unlabeled entity (Liang et al., 2020; Li et al., 2021) and the mislabeled entity (Zhang et al., 2021c).
18
+
19
+ ![](images/1b69f423163badedd746325060f2ecc5a9710027aff8d8ca612da87791ebdf31.jpg)
20
+ (a)
21
+
22
+ ![](images/269df38ced5672adabe38cc456102230d77efead2545965b9e2b11278a8d528b.jpg)
23
+ (b)
24
+
25
+ ![](images/ff2941c139a0529cc21b5e1e614941cc092d26785722c5bf3f6027516ed31f4b.jpg)
26
+ (c)
27
+ Figure 1: The analysis among all entity classes on the CoNLL03 DS-NER benchmark. Green bars represent the class-wise statistics of the distantly-labeled training set. Red bars represent the class-level performance in self-training. (1a) The distant annotation shows different qualities among different classes. (1b) In SCDL, the recall is larger than the precision only in the high-performance class (Class PER, person). (1c) In RoSTER, the low-performance class (Class MISC, miscellaneous) shows performance degradation after self-training.
28
+
29
+ Class-level imbalance in distant annotation has been underestimated in distantly-supervised named entity recognition (DS-NER), where the distant label quality varies across entity classes, as shown in Figure 1a. More specifically, the class-wise quality of the distant label depends on the coverage of class-related knowledge bases, and it is hard for knowledge bases to comprehensively cover all entities of a semantic-rich class. An entity class with high-quality distant annotation yields a high-performance class, while low-quality distant annotation leads to a low-performance class.
30
+
31
+ While self-training (Hinton et al., 2015) is an effective method for the DS-NER task (Liang et al., 2020; Zhang et al., 2021b; Meng et al., 2021; Zhang et al., 2021c), it has not been thoroughly evaluated under class-level imbalanced learning. Self-training uses the predictions of the model itself for further training, and effectively uncovers unlabeled entities. Follow-up works study the mislabeled entity from two aspects: candidate selection and label generation. For example, SCDL (Zhang et al.,
32
+
33
+ 2021c) selects consistent and high-confidence data for model training; RoSTER (Meng et al., 2021) generates pseudo labels from predictions on contextualized augmented data. However, the initial model in self-training is trained on noisy data and is biased toward the high-performance class; the subsequent training then intensifies the bias and deteriorates the low-performance class, as shown in Figure 1.
34
+
35
+ In Figure 1b, the selected candidates are dominated by the high-performance class, as the recall exceeds the precision only in that class. This biased selection can improve the generalization of the high-performance class, but impairs the exploration of the other, low-performance classes. In fact, a predefined constant threshold struggles to handle the difference in class-wise learning ability (Zhang et al., 2021a), and limits the model to focusing only on the high-performance class. In Figure 1c, the generated pseudo label fails to explore the low-performance class during self-training, as performance degradation occurs in that class. When the generated pseudo label from the biased model misleads the semantic information of the low-performance class, the iterative update under the guidance of the pseudo label expands the negative impact on that class (Wei et al., 2021).
36
+
37
+ In this work, we propose a unified self-training framework, called CLIM, to address class-level imbalanced learning in the DS-NER task. Against the dominance of the high-performance class, we calculate the current learning ability of each entity class, and adjust the class-wise threshold to improve candidate selection. Against the degradation of the low-performance class, we leverage the semantic information in distantly-labeled entities, and generate a hybrid pseudo label to improve label generation. These two parts, candidate selection and label generation, are mutually beneficial: the generated hybrid pseudo label improves feature capture for the low-performance class by injecting the distant label, and the better feature representation in turn improves the exploration of the low-performance class, as more of its candidates are selected through the class-wise threshold. The contributions are as follows:
38
+
39
+ (1) The novel class-rebalancing self-training proposed in this work addresses the imbalance between the high-performance and low-performance classes by improving candidate selection and label generation.
42
+
43
+ (2) Our method achieves state-of-the-art results on five flat and two nested datasets, and the exhaustive experimental analysis demonstrates the feasibility of addressing the class-level imbalance learning.
44
+ (3) Our work with the span-based schema extends the DS-NER task to the nested case, where two noisy nested datasets are additionally generated.
45
+
46
+ # 2 Related Work
47
+
48
+ DS-NER with Self-training. To address the noise interference in distantly-labeled data, previous works make the strong assumption that no mislabeled entity exists under distant supervision, and mainly focus on the unlabeled entity (Chen et al., 2021; Zhou et al., 2022; Peng et al., 2019; Cao et al., 2019; Shang et al., 2018; Liang et al., 2020). Among them, self-training shows its effectiveness in uncovering unlabeled entities (Liang et al., 2020; Zhang et al., 2021b). On this basis, some works improve self-training to solve the unlabeled entity, from the two aspects of candidate selection (Zhang et al., 2021c) and label generation (Meng et al., 2021). However, they take no account of the class-level imbalanced performance: the model is biased toward the high-performance class, and the subsequent training intensifies this imbalanced tendency. More importantly, this tendency significantly weakens the exploration of the low-performance class. Accordingly, our work advances self-training to tackle class-level imbalanced learning.
49
+
50
+ Self-Training with Data Augmentation. Self-training (Hinton et al., 2015) consists of both candidate selection and label generation. Specifically, self-training only selects candidates whose largest class probability falls above a predefined threshold; the generated pseudo label comes from the prediction of the model itself. Following self-training in semi-supervised learning (Sohn et al., 2020; Xie et al., 2020), perturbed inputs under different augmentations are used to decouple the similar predictions on the same input. Such data augmentation also improves model robustness and achieves competitive performance (Gao et al., 2021; Chen et al., 2021). Different from previous works that focus on the classification task with external task-relevant unlabeled data, our
51
+
52
+ work extends augmentation-driven self-training to the named entity recognition task with only noisy data.
53
+
54
+ # 3 Preliminary
55
+
56
+ Task Definition. Given an input sentence $\pmb{x} = [x_{1}, x_{2}, \dots, x_{n}]$ of $n$ tokens, the NER task aims to detect all the entities of different types. Let $s = \{s_{1}, s_{2}, \dots, s_{k}\}$ be the set of possible spans in $\pmb{x}$ . The task of span-based NER is, for each span $s_{i} \in s$ , to produce its label $y_{i} \in \mathcal{E} \cup \{\epsilon\}$ , where $\epsilon$ is the non-entity span<sup>1</sup>, and $\mathcal{E}$ is the set of pre-defined entity classes. Denote the distantly-supervised NER dataset as $D = \{(x_{m}, y_{m})\}_{m=1}^{M}$ . And $y_{m}$ is the set of distantly labeled spans, which includes mislabeled entities.
57
+
58
+ Backbone. For the contextual span representation $H(s_i) = \left[\mathbf{x}_{\mathrm{START}(i)};\mathbf{x}_{\mathrm{END}(i)};\phi (s_i)\right]$ , $\mathbf{x}_{\mathrm{START}(i)}$ and $\mathbf{x}_{\mathrm{END}(i)}$ are the embeddings of the start and end tokens of span $s_i$ , and $\phi (s_i)$ is the randomly initialized span width embedding. The output of the classifier $G_{\theta}$ is the probability distribution over entity classes, formulated as $F_{\theta}(s_i) = G_{\theta}(H(s_i))\in \mathbb{R}^C$ , where $\theta$ represents the learnable parameters and $C$ is the number of entity classes. For simplicity, the probability distribution $F_{\theta}(s_i)$ is written as $p_i$ .
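As a concrete illustration, the span enumeration and the representation $H(s_i)$ followed by $G_{\theta}$ can be sketched in PyTorch as below. This is a minimal sketch under our own assumptions: the class and function names, maximum span width, and dimensions are placeholders, not the paper's released implementation.

```python
import torch
import torch.nn as nn

def enumerate_spans(n_tokens, max_width=8):
    """All candidate spans (start, end), inclusive, of width <= max_width."""
    return [(i, j) for i in range(n_tokens)
            for j in range(i, min(i + max_width, n_tokens))]

class SpanClassifier(nn.Module):
    """Sketch of H(s_i) = [x_START; x_END; phi(s_i)] fed to a classifier G."""
    def __init__(self, hidden=768, max_width=8, width_dim=32, n_classes=5):
        super().__init__()
        # phi: span-width embedding with random initialization
        self.width_emb = nn.Embedding(max_width, width_dim)
        # G_theta: a single linear layer over the concatenated representation
        self.cls = nn.Linear(2 * hidden + width_dim, n_classes)

    def forward(self, token_emb, spans):
        # token_emb: (n_tokens, hidden) contextual embeddings from the encoder
        starts = torch.stack([token_emb[i] for i, _ in spans])
        ends = torch.stack([token_emb[j] for _, j in spans])
        widths = torch.tensor([j - i for i, j in spans])
        h = torch.cat([starts, ends, self.width_emb(widths)], dim=-1)
        return self.cls(h).softmax(-1)  # p_i over the C classes
```

A sentence of 4 tokens with `max_width=2` yields 7 candidate spans, each mapped to a probability distribution over the classes.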
59
+
60
+ Augmentation-Driven Self-Training. General self-training leverages the model itself to obtain pseudo labels, with the loss function $\mathcal{L} = \frac{1}{N}\sum_{i=1}^{N}\mathbb{1}(\max p_i\geq \tau)\mathrm{CE}(\hat{p}_i,p_i)$ , where $N = |s|$ , $\tau$ is the upper-bound threshold, and CE is the cross-entropy function. Here $\hat{p}_i$ is the generated one-hot pseudo label, representing the class $\arg\max p_i$ .
61
+
62
+ Driven by data augmentation, random masking with two different probabilities is applied to the attention matrix of the same input, producing the strongly-augmented data $\mathcal{S}(s_i)$ and the weakly-augmented data $\mathcal{W}(s_i)$ . The strong augmentation function $\mathcal{S}$ uses a high masking probability and its output is used to predict the probability distribution over classes, while the weak augmentation $\mathcal{W}$ uses a low masking probability and its output is used to derive the pseudo label. The loss function
63
+
64
+ in self-training thereby has the form:
65
+
66
+ $$
67
+ \mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}(\max p_{wi} \geq \tau)\, \mathrm{CE}(\hat{p}_{wi}, p_{si}), \tag{1}
68
+ $$
69
+
70
+ where $p_{si} = F_{\theta}\left(\mathcal{S}(s_i)\right)$ and $p_{wi} = F_{\theta}\left(\mathcal{W}(s_i)\right)$ , and $\hat{p}_{wi}$ is the generated one-hot label, representing the class $\arg\max p_{wi}$ .
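Eq. 1 can be sketched in a few lines, assuming the two probability tensors are already computed. The function name and signature are illustrative, not from the paper's code.

```python
import torch
import torch.nn.functional as F

def self_training_loss(p_w, p_s, tau=0.9):
    """Eq. 1 sketch: the weak view selects spans and supplies one-hot pseudo
    labels, while the strong view is the prediction being trained.

    p_w, p_s: (N, C) class probabilities for the weakly/strongly augmented
    views of the same N spans; tau is the upper-bound confidence threshold.
    """
    conf, pseudo = p_w.max(dim=-1)       # max p_wi and arg max p_wi
    mask = (conf >= tau).float()         # 1(max p_wi >= tau)
    # cross-entropy of the strong view against the one-hot pseudo label
    ce = F.nll_loss(p_s.clamp_min(1e-8).log(), pseudo, reduction="none")
    return (mask * ce).mean()            # average over all N spans
```

Spans below the threshold contribute zero loss but still count toward the $1/N$ average, matching Eq. 1.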
71
+
72
+ # 4 CLIM
73
+
74
+ We advance self-training to tackle class-level imbalanced learning, with more careful treatment of candidate selection and label generation. An overview of our framework is illustrated in Figure 2, and the training algorithm is shown in Algorithm 1.
75
+
76
+ # 4.1 Flexible Threshold in Candidate Selection
77
+
78
+ To alleviate the dominance of the high-performance class, we improve candidate selection in self-training by adjusting the threshold for each class. In previous work (Zhang et al., 2021c), the constant threshold is biased toward the high-performance class, whose high-confidence predictions account for the majority of the selected candidates. The low-performance classes cannot be sufficiently explored during self-training, as the constant threshold masks out their samples. Therefore, we calculate the current learning ability of each entity class, and dynamically adjust the class-wise threshold for candidate selection. The basic idea agrees with curriculum learning (Zhang et al., 2021a), where candidates are gradually selected according to their learning ability.
79
+
80
+ The learning ability $\sigma_{c}$ of an entity class $c$ is reflected by the number of spans whose prediction falls into that class, $N_{c} = \mathbb{1}(\arg \max p_{wi} = c)$ , and above the threshold, $N_{\mathcal{T}} = \mathbb{1}(\max p_{wi} > \mathcal{T}(c))$ , which is formulated as:
81
+
82
+ $$
83
+ \sigma_{c} = \sum_{\boldsymbol{x} \in D} \sum_{s_{i} \in s} N_{\mathcal{T}} \cdot N_{c}. \tag{2}
84
+ $$
85
+
86
+ Then the class-wise flexible threshold $\mathcal{T}(c)$ is formulated as
87
+
88
+ $$
89
+ \mathcal{T}(c) = \mathcal{M}(\beta(\sigma_{c})) \cdot \tau. \tag{3}
90
+ $$
91
+
92
+ First, to reduce the bias of parameter initialization at the early stage, the warm-up process $\beta (\sigma_c) = \sigma_c / \max \left\{\max_{c'}\sigma_{c'},\mathcal{N} - \sum_{c'}\sigma_{c'}\right\}$ is designed, where $c^{\prime}$ enumerates all entity classes
93
+
94
+ ![](images/697f290fbc83eaa6e015c1ab473337ce4ae1d116ae66c3335600aea95aa1d228.jpg)
95
+ Figure 2: Overview of the proposed framework. Span-level probability distributions are produced for the strongly-augmented (former) and weakly-augmented (latter) samples. The former is the prediction used in the loss computation. In Part 1, the latter is used to select candidates whose confidence is above the class-wise threshold. In Part 2, the latter is converted into the generated one-hot pseudo label, and the distant label is further introduced to generate the hybrid pseudo label.
96
+
97
+ and $\mathcal{N}$ represents the number of labeled entities in the distantly-labeled training set. Second, the non-linear mapping function $\mathcal{M}(x) = x / (2 - x)$ is designed so that its output is more sensitive when $x$ is large and less sensitive when $x$ is small. In addition, we treat the pseudo labeling of non-entity spans $\epsilon$ specially, since non-entity spans make up the majority of the span set $s$ : we set $\mathcal{T}(c = \epsilon)$ to the same value as the upper-bound threshold $\tau$ , to filter out non-entity spans in the early stage. With the class-wise threshold, we update the selection strategy to:
98
+
99
+ $$
100
+ \mathbb{1}\left(\max p_{wi} > \mathcal{T}(\arg\max p_{wi})\right). \tag{4}
101
+ $$
102
+
103
+ Further, a re-weighting strategy that inverts the class-wise threshold is applied to each span. The coefficient of a span is defined as:
104
+
105
+ $$
106
+ \alpha(c_{i}) = 2 - \mathcal{T}(c_{i}), \tag{5}
107
+ $$
108
+
109
+ where $c_{i} = \arg \max p_{wi}$ . We also set $\alpha (c = \epsilon)$ to the upper-bound threshold $\tau$ , to reduce the attention paid to the predominant non-entity spans.
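Putting Eqs. 2-5 together, candidate selection can be sketched as below. This is a simplified sketch over entity classes only: the special non-entity settings $\mathcal{T}(c=\epsilon)=\tau$ and $\alpha(c=\epsilon)=\tau$ are omitted, and the zero-division guard is our own addition, not specified in the paper.

```python
import torch

def flexible_thresholds(p_w, sigma, n_labeled, tau=0.9):
    """Sketch of Eqs. 2-5 for C entity classes.

    p_w: (N, C) weak-view probabilities; sigma: (C,) learning-ability counts
    from the previous pass; n_labeled: number N of distantly-labeled entities.
    Returns per-class thresholds T(c), the selection mask, the span weights
    alpha, and the recounted sigma for the next pass.
    """
    # Warm-up normalization beta(sigma_c), Eq. 3's inner term
    denom = torch.maximum(sigma.max(), n_labeled - sigma.sum())
    beta = sigma / denom.clamp_min(1.0)      # guard against division by zero
    T = (beta / (2.0 - beta)) * tau          # M(x) = x / (2 - x), then Eq. 3
    conf, cls = p_w.max(dim=-1)
    mask = conf > T[cls]                     # selection strategy, Eq. 4
    alpha = 2.0 - T[cls]                     # re-weighting coefficient, Eq. 5
    # Eq. 2: recount confident predictions per class for the next iteration
    new_sigma = torch.zeros_like(sigma)
    for c in range(sigma.numel()):
        new_sigma[c] = ((cls == c) & mask).sum()
    return T, mask, alpha, new_sigma
```

At initialization ($\sigma_c = 0$) every threshold is 0, so low-confidence candidates can be selected early; as a class's $\sigma_c$ grows, its threshold rises toward $\tau$.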
110
+
111
+ # 4.2 Distant Supervision in Label Generation
112
+
113
+ To tackle the degradation of the low-performance class, we advance label generation by injecting the semantic information of the distant label. The previous DS-NER work (Meng et al., 2021) leverages the prediction of the model itself to produce the pseudo label. Nevertheless, the model tends to capture information from the high-performance class, and the semantic information captured by the
114
+
115
+ model is severely limited for the low-performance class. Thus the prediction based on the model causes a negative influence on the low-performance class, and the iterative update further expands this negative impact.
116
+
117
+ The hybrid pseudo label, which injects the distant label, can substantially alleviate this capturing limitation for the low-performance class. More specifically, the distantly-labeled entities from the knowledge base contain useful information, since these knowledge bases are carefully collected for specific entity classes. The hybrid pseudo label is formulated as follows:
118
+
119
+ $$
120
+ h_{wi} = \lambda_{p} \hat{p}_{wi} + \lambda_{y} y_{i}, \tag{6}
121
+ $$
122
+
123
+ where $y_{i}$ is the distant label of span $s_i$<sup>2</sup>.
124
+
125
+ In different training stages, the model pays different attention to these labels. In the early stage, the model obtains entity features mainly from the distantly-labeled data. When the pseudo label with high confidence is generated, the model is more sensitive to the potential entity behind the noisy training data. Therefore, we dynamically adjust the weights of the distant label $y_{i}$ and the pseudo label $\hat{p}_{wi}$ . Then the dynamic weighting is formulated as follows:
126
+
127
+ $$
128
+ \lambda_{y} = \left(\cos\left(0.5 \cdot \pi (\hat{t} + 1)\right) + 1\right)^{2}, \tag{7}
129
+ $$
130
+
131
+ $$
132
+ \lambda_{p} = \left(\sin\left(0.5 \cdot \pi (\hat{t} - 1)\right) + 1\right)^{2}, \tag{8}
133
+ $$
134
+
135
+ # Algorithm 1 CLIM Training Algorithm
136
+
137
+ Input: Maximum iteration $T$ ; Training set $\{(\pmb{x}_m, \pmb{y}_m)\}_{m=1}^M$ .
138
+
139
+ 1: Initialize $\sigma_0(c) = 0$
140
+ 2: while $t = 1, 2, \dots, T$ do
141
+ 3: Generate $\pmb{s} = \{s_1, s_2, \dots, s_i, \dots\}$ from $\pmb{x}_m$ .
142
+ 4: Calculate $p_{si}$ and $p_{wi}$ with different augmentation.
143
+ 5: for $c$ in $\mathcal{E}$ do
144
+ 6: Update threshold $\mathcal{T}(c)$ via Eq. 3.
145
+ 7: Update learning ability $\sigma_{c}$ via Eq. 2.
146
+ 8: end for
147
+ 9: Select candidates via Eq. 4.
148
+ 10: Calculate coefficient $\alpha (c_{i})$ via Eq. 5.
149
+ 11: Generate hybrid pseudo label $h_{wi}$ via Eq. 6.
150
+ 12: Back-propagation $\mathcal{L}$ via Eq. 9.
151
+ 13: end while
152
+
153
+ Output: Model parameters.
154
+
155
+ where $\hat{t} = t / t_{total} \in [0,1]$ , and $t_{total}$ is the hyperparameter of total training steps.
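A small sketch of Eqs. 6-8 follows (names and signature are illustrative). Note the endpoints: at $\hat{t}=0$ the hybrid label equals the distant label ($\lambda_y=1$, $\lambda_p=0$), and at $\hat{t}=1$ it equals the model's one-hot pseudo label.

```python
import math
import torch
import torch.nn.functional as F

def hybrid_pseudo_label(p_w, y_dist, t, t_total):
    """Sketch of Eqs. 6-8: blend the model's one-hot pseudo label with the
    distant label, shifting weight from distant to pseudo over training.

    p_w: (N, C) weak-view probabilities; y_dist: (N,) distant class ids.
    """
    t_hat = t / t_total                                        # in [0, 1]
    lam_y = (math.cos(0.5 * math.pi * (t_hat + 1)) + 1) ** 2   # Eq. 7
    lam_p = (math.sin(0.5 * math.pi * (t_hat - 1)) + 1) ** 2   # Eq. 8
    C = p_w.size(-1)
    p_hat = F.one_hot(p_w.argmax(-1), C).float()               # arg max p_wi
    y = F.one_hot(y_dist, C).float()                           # distant label
    return lam_p * p_hat + lam_y * y                           # Eq. 6, h_wi
```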
156
+
157
+ Finally, integrating the above two advanced components, the loss function in CLIM is represented as:
158
+
159
+ $$
+ \mathcal{L} = \frac{1}{N} \sum_{i=1}^{N} \left[ \mathbb{1}\left(\max p_{wi} > \mathcal{T}(c_{i})\right) \cdot \alpha(c_{i})\, \mathrm{CE}\left(h_{wi}, p_{si}\right) \right]. \tag{9}
+ $$
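Given the hybrid labels $h_{wi}$ (Eq. 6), class-wise thresholds $\mathcal{T}$ (Eq. 3), and weights $\alpha$ (Eq. 5), the final loss of Eq. 9 reduces to a masked, weighted soft-label cross-entropy. A minimal sketch (illustrative names, inputs assumed precomputed):

```python
import torch

def clim_loss(p_w, p_s, h, T, alpha):
    """Eq. 9 sketch: flexible-threshold mask, per-span weight alpha, and
    cross-entropy of the strong view against the soft hybrid label h.

    p_w, p_s, h: (N, C); T: (C,) class-wise thresholds; alpha: (N,) weights.
    """
    conf, cls = p_w.max(dim=-1)
    mask = (conf > T[cls]).float()                  # 1(max p_wi > T(c_i))
    ce = -(h * p_s.clamp_min(1e-8).log()).sum(-1)   # CE(h_wi, p_si)
    return (mask * alpha * ce).mean()               # average over all N spans
```

With all thresholds at 0, unit weights, and one-hot labels this collapses to the plain cross-entropy of Eq. 1.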
166
+
167
+ # 5 Experiment
168
+
169
+ # 5.1 Experimental Setup
170
+
171
+ Dataset. We evaluate on five flat benchmarks, including CoNLL03 (Tjong Kim Sang and De Meulder, 2003), Tweet (Godin et al., 2015), OntoNotes5.0 (Weischedel et al., 2013), Wikigold (Balasuriya et al., 2009), and Webpage (Ratinov and Roth, 2009). We also construct two nested benchmarks, ACE2004 (Doddington et al., 2004) and ACE2005 (Walker et al., 2006). For the flat case, the distant label is generated by matching entities in external knowledge bases, following BOND (Liang et al., 2020). For the nested case, the details of distant label generation are described in Appendix C. The dataset statistics are provided in Appendix D.
172
+
173
+ Baseline. First, KB Matching is provided as the reference of the distant supervision quality. Second, we compare our method with the competitive baselines from the following two aspects.
174
+
175
+ (1) No Labeling Denoising. With the combination of the pre-trained language model RoBERTa (Liu et al., 2019) and classifier, both token-based (RoBERTa-Token) and span-based schema (RoBERTa-Span) are implemented.
176
+
177
+ (2) Labeling Denoising. In this part, we classify the baselines according to whether a self-training process is used. On the one hand, AutoNER (Shang et al., 2018) designs a modified tagging scheme, LRNT (Cao et al., 2019) uses partial CRFs with a non-entity sampling strategy, Co-Teaching (Yu et al., 2019) adopts an advanced sampling strategy, and Conf-MPU (Zhou et al., 2022) employs a multi-class positive and unlabeled learning method. On the other hand, works with the self-training strategy are used as strong baselines: BOND (Liang et al., 2020) implements basic self-training with the teacher-student framework; BA-CIR (Zhang et al., 2021b) introduces causal intervention into self-training; with the schema of ensemble learning, SCDL (Zhang et al., 2021c) and RoSTER (Meng et al., 2021) study the unlabeled entity from candidate selection and label generation, respectively.
178
+
179
+ Implementation Detail. For a fair comparison, each main result is the average of 5 runs. We implement our code<sup>3</sup> in PyTorch based on huggingface Transformers<sup>4</sup>, and employ base-size RoBERTa (Liu et al., 2019) to obtain contextual representations. The specific experimental settings are as follows: the maximum masking probability is 0.05 for the weakly-augmented sample and 0.2 for the strongly-augmented sample; $\mathcal{T}(c = \epsilon)$ , $\alpha(c = \epsilon)$ , and the confidence threshold $\tau$ are all set to 0.9; a cosine learning-rate decay schedule with no warm-up steps and 4 hard restarts is employed; the optimizer is AdamW with $\beta_{1} = 0.9$ and $\beta_{2} = 0.999$ ; the training batch size is 16 and the maximum sequence length is 128. More implementation details are listed in Appendix E.
180
+
181
+ # 5.2 Main Result
182
+
183
+ Flat Distantly Labeled NER Task. The span F1 scores for the flat case are listed in Table 1. Our method achieves SOTA results on all five benchmarks. We summarize the results as follows. (1) For non-denoising methods (the second part of Table 1), the span-based method (RoBERTa-Span) outperforms the token-based method (RoBERTa-Token), implying the effectiveness of the span-based schema in DS-NER. (2) For denoising methods (the third part of Table 1), the models with
184
+
185
+ <table><tr><td colspan="2"></td><td>CoNLL03</td><td>Tweet</td><td>OntoNotes5.0</td><td>Webpage</td><td>Wikigold</td><td>Average</td></tr><tr><td></td><td>KB Matching‡</td><td>0.714</td><td>0.358</td><td>0.595</td><td>0.525</td><td>0.478</td><td>0.534</td></tr><tr><td rowspan="2">No Label Denoising</td><td>RoBERTa-Token*</td><td>0.759</td><td>0.465</td><td>0.682</td><td>0.610</td><td>0.526</td><td>0.608</td></tr><tr><td>RoBERTa-Span†</td><td>0.781</td><td>0.525</td><td>0.691</td><td>0.628</td><td>0.526</td><td>0.630</td></tr><tr><td rowspan="4"></td><td>AutoNER‡ (Shang et al., 2018)</td><td>0.670</td><td>0.261</td><td>0.672</td><td>0.514</td><td>0.475</td><td>0.518</td></tr><tr><td>LRNT‡ (Cao et al., 2019)</td><td>0.697</td><td>0.238</td><td>0.677</td><td>0.477</td><td>0.462</td><td>0.510</td></tr><tr><td>Co-Teaching‡ (Yu et al., 2019)</td><td>0.764</td><td>0.467</td><td>0.680</td><td>0.584</td><td>0.521</td><td>0.603</td></tr><tr><td>Conf-MPU (Zhou et al., 2022)</td><td>0.800</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td rowspan="5">Label Denoising</td><td>BOND (Liang et al., 2020)</td><td>0.815</td><td>0.480</td><td>0.684</td><td>0.657</td><td>0.601</td><td>0.647</td></tr><tr><td>BA-CIR (Zhang et al., 2021b)</td><td>0.815</td><td>0.490</td><td>-</td><td>0.647</td><td>0.615</td><td>-</td></tr><tr><td>RoSTER (Meng et al., 2021)</td><td>0.854</td><td>0.445†</td><td>0.696†</td><td>0.544†</td><td>0.678</td><td>0.643</td></tr><tr><td>SCDL (Zhang et al., 2021c)</td><td>0.837</td><td>0.511</td><td>0.686</td><td>0.685</td><td>0.641</td><td>0.672</td></tr><tr><td>CLIM (Ours)</td><td>0.854</td><td>0.538</td><td>0.696</td><td>0.700</td><td>0.679</td><td>0.693</td></tr></table>
186
+
187
+ Table 1: The main results on the flat DS-NER task, via span F1 scores. Baselines marked with $\ddagger$ are taken from (Liang et al., 2020), and the baseline marked with $\star$ is taken from (Zhang et al., 2021c). Baselines and results marked with $\dagger$ are our own runs. The best results are marked in bold.
188
+
189
+ self-training (BOND, BA-CIR, RoSTER, SCDL, and Ours) show better performance than the other denoising methods, reflecting the superiority of self-training methods in DS-NER. (3) Compared with the strong baseline RoSTER, our model shows better robustness across various data settings. (4) On extremely noisy data, our model significantly outperforms the other methods: on the Tweet dataset, which has a low KB-matching value, our model boosts the span F1 score by $2.2\%$ over the previous SOTA method SCDL.
+
+ Nested Distantly Labeled NER Task. For the nested ACE04 and ACE05, the span F1 values are listed in Table 2. Given the outstanding performance of the teacher-student framework (BOND) and ensemble learning (RoSTER and SCDL) in the flat case, we implement two strong baselines for a fair comparison, namely Tea-Stu (span-based) and Ensemble (span-based). We conclude the nested results with two aspects. (1) Compared to KB matching, our model achieves higher F1 scores by significant margins, showing that our model is effective at handling noisy data in the nested NER task. (2) Consistent with the flat case, our model still achieves the best results among these self-training methods.
+
+ <table><tr><td></td><td>ACE04</td><td>ACE05</td></tr><tr><td>KB Matching</td><td>0.711</td><td>0.708</td></tr><tr><td>RoBERTa-Span</td><td>0.770</td><td>0.768</td></tr><tr><td>Tea-Stu (span-based)</td><td>0.782</td><td>0.791</td></tr><tr><td>Ensemble (span-based)</td><td>0.819</td><td>0.819</td></tr><tr><td>CLIM (Ours)</td><td>0.831</td><td>0.822</td></tr></table>
+
+ Table 2: The main results on the nested DS-NER task, via span F1 scores. We run all baselines using the span-based schema. The value of KB Matching is the result of the manually-labeled noisy training data. The best results are marked in bold.
+
+ <table><tr><td></td><td>LOC</td><td>ORG</td><td>PER</td><td>MISC</td><td>ALL</td></tr><tr><td>RoSTER</td><td>0.923</td><td>0.839</td><td>0.942</td><td>0.528(0.861/0.380)</td><td>0.862</td></tr><tr><td>SCDL</td><td>0.817</td><td>0.803</td><td>0.913</td><td>0.609(0.802/0.491)</td><td>0.817</td></tr><tr><td>Ours</td><td>0.877</td><td>0.885</td><td>0.920</td><td>0.673(0.744/0.615)</td><td>0.864</td></tr></table>
+
+ Table 3: Class-level performance comparison with strong baselines on the CoNLL03 training set, via span F1 (Precision / Recall).
202
+
203
+ # 5.3 Denoising Performance Analysis
204
+
205
+ Based on the prediction and ground-truth label (not distant label) in the CoNLL03 training set, we discuss the denoising performance at the class level, compared to the strong baselines RoSTER and SCDL.
206
+
207
+ More Consistency with Flexible Threshold. In general (ALL in Table 3), the pseudo label generated by our model is more accurate, which is strongly related to robustness under noisy-data interference. Among the different classes (LOC, ORG, PER, MISC in Table 3), our model shows more consistent performance, especially for the entity class MISC. The reason is that, unlike the baselines, the class-wise flexible threshold accounts for the different learning abilities, and pays more attention to classes beyond the high-performance one.
208
+
209
+ Better Exploration with Hybrid Pseudo Label. The low-performance class MISC (MISC in Table 3) shows a significantly higher recall in our model, implying that the special design of the hybrid pseudo label improves feature exploration of the low-performance class by addressing the bias in label generation. In addition, our model further improves the overall performance of the low-performance class MISC, showing that it largely alleviates the performance degradation in this class.
+
+ ![](images/ab05e44e62076a11a26252cbb8c792abcc940c5abcbe046480b5c717dd7ea19d.jpg)
+
+ ![](images/8f7a9961018c05549d853191ace562878829e68d60962df7e254cf1c67460873.jpg)
+ Figure 3: The representation visualization of entities on the CoNLL03 testing set. The left subgraph shows the strong baseline SCDL, and the right shows our model. Markers with different colors represent different entity classes, and the yellow markers represent wrong predictions for entity class ORG (ORG Wrong Pred.). The green and red circles are shown in the left subgraph.
217
+
218
+ # 5.4 Improvement from Hybrid Pseudo Label
219
+
220
+ We discuss the effects of the hybrid pseudo label with representation visualization, compared to the strong baseline SCDL. All entities are visualized in Figure 3, where different colors represent different entity classes. We take the entity class ORG as an example and highlight its wrong predictions, i.e., spans whose ground-truth label is ORG but which are classified into other types.
221
+
222
+ Strong Classification Ability. Considering the green circles highlighted in Figure 3, the yellow markers (wrong predictions) of the strong baseline SCDL are more widely distributed among different groups than those of our model. Unlike SCDL, which only uses the prediction of the model itself, we additionally integrate the knowledge in the distantly-labeled entities into self-training. Since the distantly-labeled entities come from entity-related knowledge bases, the distantly-labeled data contains abundant entity-related semantic information, which provides additional information for entity classification in self-training.
+
+ ![](images/7e61ebbe7440beeb64f03ddc1a7fbb68f9fa5e07cdee33c264e1f1483be47ac3.jpg)
+
+ ![](images/578bebc687782c0e44ee0517e32d154292f630eb54d639c71983686d614ff3e3.jpg)
+ (a)
+
+ ![](images/c085cd4d39d13a9aca4d2afd2717b82f53b8813850238cc7d0656ca88726d350.jpg)
+ (b)
+ Figure 4: The class-wise analysis on the CoNLL03 testing set. The horizontal axis represents the training steps. (4a) The vertical axis represents the span F1 scores; the upper subgraph denotes our model w/o CFT, and the lower subgraph denotes our model. (4b) The vertical axis represents the class-wise threshold based on our model.
234
+
235
+ Clear Separation between Entity Classes. Considering the red circle highlighted in Figure 3, markers of different entity classes (red, orange, and blue) are mixed, indicating that entity classes with similar semantics are wrongly clustered. This is presumably because the bias of the pseudo label further expands during self-training, as the model is updated iteratively under the guidance of this pseudo label. By injecting the distant label, our model alleviates this bias with the semantic information of the distantly-labeled entities, and better identifies the differences between similar entity classes.
236
+
237
+ # 5.5 Boosting from Flexible Threshold
238
+
239
+ The effect of the Class-wise Flexible Threshold (CFT) is analyzed in detail by looking into the training process. The F1 scores of each entity class against the training iterations are shown in Figure 4. We mainly focus on the entity class MISC (represented by the red line), which contains complex semantics (Tong et al., 2021) and shows low performance. The detailed characteristics of the training process are provided in Appendix A.
+
+ ![](images/bbc2a4b9729c191db2b1840fcce6ef7a3a537b6182bbe01052e858434c66ee9d.jpg)
+ Figure 5: The class-level performance on the nested ACE05 testing set. The left subgraph shows the span F1 scores of all classes, and the right shows the class-wise absolute difference between precision and recall.
245
+
246
+ Effectiveness of Warm-up Process. Unlike the counterpart (w/o CFT), our model can quickly identify the low-performance class MISC in the early stage. We infer that the warm-up strategy in the flexible-threshold design allows candidates with low confidence to be selected early in training.
247
+
248
+ Attention for Complex Class. As training progresses, the line of the complex class MISC (Tong et al., 2021) in our model (the upper subgraph) keeps rising until it reaches a steady state, whereas the model without CFT reaches a plateau prematurely. Therefore, our model effectively captures the complex features of the class MISC. Besides, the increased capability for recognizing the class MISC appears at the late stage of training. We conjecture this is due to the memorization mechanism of deep networks, which first memorize simple patterns before complex ones (Arpit et al., 2017). Our model adapts well to this mechanism, as the class-wise flexible threshold is dynamically adjusted according to the varying learning ability of the complex class during training.
249
+
250
+ # 5.6 Nested Case Study
251
+
252
+ Our work extends the DS-NER task to the nested case; detailed experimental results are provided below.
253
+
254
+ Class-Balancing Performance. We focus on the nested benchmark ACE05 and analyze the class-level performance against the strong baseline Ensemble. Overall, the class-level behavior in the nested case agrees with that in the flat case. First, our model improves significantly on the low-performance classes (Classes 4, 5, and 6),
255
+
256
+ <table><tr><td>Statistics in training data</td><td>0.437</td><td>0.569</td><td>0.708</td></tr><tr><td>Predictions in testing data</td><td>0.774</td><td>0.808</td><td>0.822</td></tr></table>
257
+
258
+ Table 4: The model performance with different noise levels, via the span F1 score.
259
+
260
+ <table><tr><td></td><td>CoNLL03</td><td>Wikigold</td><td>Tweet</td><td>ACE04</td></tr><tr><td>Our model</td><td>0.854</td><td>0.679</td><td>0.538</td><td>0.831 / 151†</td></tr><tr><td>Const. Thresh. (CT)</td><td>0.817</td><td>0.593</td><td>0.526</td><td>0.830 / 226†</td></tr><tr><td>Linear Thresh. (LT)</td><td>0.826</td><td>0.565</td><td>0.537</td><td>0.829 / 191†</td></tr><tr><td>Const. Weight. (CW)</td><td>0.808</td><td>0.579</td><td>0.529</td><td>0.801</td></tr><tr><td>Data Aug. (DA)</td><td>0.841</td><td>0.535</td><td>0.532</td><td>0.819</td></tr></table>
261
+
262
+ Table 5: The ablation study. The values marked with $^\dagger$ denote the number of training epochs when the model reaches the optimal state, and other values denote the span F1 scores.
263
+
264
+ as shown in the left subgraph of Figure 5, and exhibits more consistent performance across all classes. Second, compared to the Ensemble baseline, our model closes the large gap between precision and recall in these classes, as observed in the right subgraph of Figure 5. These two observations confirm the validity of candidate selection and label generation in CLIM.
265
+
266
+ Robustness to Different Noise Levels. As mentioned in Appendix C, the distant label generation for the nested dataset is based on the statistics of CoNLL03. We further generate distant labels using the statistics of other flat benchmarks, namely Wikigold and Tweet, and investigate performance under the resulting noise levels in the training set. As shown in Table 4, our model is robust to varying degrees of noise.
267
+
268
+ # 5.7 Ablation Study
269
+
270
+ As shown in Table 5, we conduct an extensive ablation study to validate the effectiveness of each component, covering the following aspects: (1) replacing the flexible threshold (Section 4.1) with a constant threshold (CT) or a linearly-increased threshold (LT); (2) replacing the dynamic weighting of the pseudo label and distant label (Eq. 6) with constant weighting (CW); (3) replacing random masking of the attention matrix with random masking of the token input for data augmentation (DA).
271
+
272
+ Compared across benchmarks, the nested case shows more robust performance than the flat case. The flexible threshold strategy significantly accelerates convergence in the nested case, as our model takes around 50 fewer training epochs
273
+
274
+ to converge than its counterparts (CT and LT).
275
+
276
+ When each component is replaced separately, the model shows varying degrees of performance degradation, indicating the effectiveness of every component. We summarize the following observations. (1) Compared to the constant threshold (CT), the linearly-increased threshold (LT) performs better, except on Wikigold. Although a linearly-increased threshold imitates the growth of the model's learning ability, the class-level mismatch between learning ability and threshold may deteriorate performance. (2) The simple random masking, inspired by the pre-training strategies of pre-trained language models (Vaswani et al., 2017), performs best. More advanced data augmentation strategies could be explored within our framework, which is beyond the scope of this paper. Further, we conduct a comprehensive parameter study in Appendix B.
277
+
278
+ # 6 Conclusion
279
+
280
+ This work advances class-rebalancing self-training in distantly-supervised named entity recognition. With the class-wise flexible threshold and the fine-grained hybrid pseudo label in self-training, our work tackles the dominance of high-performance classes and the degradation of low-performance classes. Experiments show state-of-the-art results on seven benchmarks, and comprehensive analysis further demonstrates more consistent class-level learning and stronger semantic classification ability. Our work, especially the advanced designs in self-training, positively impacts robust learning with noisy data, providing a class-rebalancing method to exploit the semantic information in distantly-labeled data.
281
+
282
+ # Limitations
283
+
284
+ In the augmentation-driven self-training, we implement data augmentation with random masking for simplicity, since augmentation is not the focus of this work. Wang and Henao (2021) have explored more fine-grained data augmentation strategies, which may further improve performance.
285
+
286
+ # Acknowledgments
287
+
288
+ We would like to thank the anonymous reviewers for their insightful comments and constructive suggestions. This research is supported by the National Key Research and Development Program
289
+
290
+ of China (Grant No. 2020YFB1707803) and the Fundamental Research Funds for the Central Universities (Grant No. 226-2022-00051).
291
+
292
+ # References
293
+
294
+ Devansh Arpit, Stanisław Jastrzebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon Lacoste-Julien. 2017. A closer look at memorization in deep networks. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 233-242. PMLR.
295
+ Dominic Balasuriya, Nicky Ringland, Joel Nothman, Tara Murphy, and James R. Curran. 2009. Named entity recognition in Wikipedia. In Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources (People's Web), pages 10-18, Suntec, Singapore. Association for Computational Linguistics.
296
+ Yixin Cao, Zikun Hu, Tat-seng Chua, Zhiyuan Liu, and Heng Ji. 2019. Low-resource name tagging learned with weakly labeled data. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 261-270, Hong Kong, China. Association for Computational Linguistics.
297
+ Yiming Chen, Yan Zhang, Chen Zhang, Grandee Lee, Ran Cheng, and Haizhou Li. 2021. Revisiting self-training for few-shot learning of language model. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9125-9135, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
298
+ George R. Doddington, Alexis Mitchell, Mark A. Przybocki, Lance A. Ramshaw, Stephanie M. Strassel, and Ralph M. Weischedel. 2004. The automatic content extraction (ACE) program - tasks, data, and evaluation. In LREC, volume 2, pages 837-840. Lisbon.
299
+ Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6894-6910, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
300
+ Frédéric Godin, Baptist Vandersmissen, Wesley De Neve, and Rik Van de Walle. 2015. Multimedia lab @ ACL WNUT NER shared task: Named entity recognition for Twitter microposts using distributed word representations. In Proceedings of the Workshop on Noisy User-generated Text, pages 146-153, Beijing, China. Association for Computational Linguistics.
301
+
302
+ Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. ArXiv, abs/1503.02531.
303
+ Zhanming Jie, Pengjun Xie, Wei Lu, Ruixue Ding, and Linlin Li. 2019. Better modeling of incomplete annotations for named entity recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 729-734, Minneapolis, Minnesota. Association for Computational Linguistics.
304
+ Yangming Li, Lemao Liu, and Shuming Shi. 2021. Empirical analysis of unlabeled entity problem in named entity recognition. In International Conference on Learning Representations.
305
+ Chen Liang, Yue Yu, Haoming Jiang, Siawpeng Er, Ruijia Wang, Tuo Zhao, and Chao Zhang. 2020. Bond: Bert-assisted open-domain named entity recognition with distant supervision. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, page 1054-1064, New York, NY, USA. Association for Computing Machinery.
306
+ Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.
307
+ Yu Meng, Yunyi Zhang, Jiaxin Huang, Xuan Wang, Yu Zhang, Heng Ji, and Jiawei Han. 2021. Distantly-supervised named entity recognition with noise-robust learning and language model augmented self-training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10367-10378, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
308
+ Minlong Peng, Xiaoyu Xing, Qi Zhang, Jinlan Fu, and Xuanjing Huang. 2019. Distantly supervised named entity recognition using positive-unlabeled learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2409-2419, Florence, Italy. Association for Computational Linguistics.
309
+ Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 147-155, Boulder, Colorado. Association for Computational Linguistics.
310
+ Jingbo Shang, Liyuan Liu, Xiaotao Gu, Xiang Ren, Teng Ren, and Jiawei Han. 2018. Learning named entity tagger using domain-specific dictionary. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2054-2064, Brussels, Belgium. Association for Computational Linguistics.
311
+
312
+ Kihyuk Sohn, David Berthelot, Nicholas Carlini, Zizhao Zhang, Han Zhang, Colin A Raffel, Ekin Dogus Cubuk, Alexey Kurakin, and Chun-Liang Li. 2020. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. In Advances in Neural Information Processing Systems, volume 33, pages 596-608. Curran Associates, Inc.
313
+ Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.
314
+ Meihan Tong, Shuai Wang, Bin Xu, Yixin Cao, Minghui Liu, Lei Hou, and Juanzi Li. 2021. Learning from miscellaneous other-class words for few-shot named entity recognition. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6236-6247, Online. Association for Computational Linguistics.
315
+ Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
316
+ Christopher Walker, Stephanie Strassel, Julie Medero, and Kazuaki Maeda. 2006. Ace 2005 multilingual training corpus. Linguistic Data Consortium, Philadelphia, 57:45.
317
+ Rui Wang and Ricardo Henao. 2021. Unsupervised paraphrasing consistency training for low resource named entity recognition. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5303-5308, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
318
+ Chen Wei, Kihyuk Sohn, Clayton Mellina, Alan Yuille, and Fan Yang. 2021. Crest: A class-rebalancing self-training framework for imbalanced semi-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10857-10866.
319
+ Ralph Weischedel, Martha Palmer, Mitchell Marcus, Edward Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. Ontonotes release 5.0 ldc2013t19. Linguistic Data Consortium, Philadelphia, PA, 23.
320
+ Qizhe Xie, Zihang Dai, Eduard Hovy, Thang Luong, and Quoc Le. 2020. Unsupervised data augmentation for consistency training. In Advances in Neural Information Processing Systems, volume 33, pages 6256-6268. Curran Associates, Inc.
321
+ Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Tsang, and Masashi Sugiyama. 2019. How does disagreement help generalization against label corruption? In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 7164-7173. PMLR.
322
+
323
+
324
+
325
+ Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, and Takahiro Shinozaki. 2021a. Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling. In Advances in Neural Information Processing Systems, volume 34, pages 18408-18419. Curran Associates, Inc.
326
+
327
+ Wenkai Zhang, Hongyu Lin, Xianpei Han, and Le Sun. 2021b. De-biasing distantly supervised named entity recognition via causal intervention. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4803-4813, Online. Association for Computational Linguistics.
328
+
329
+ Xinghua Zhang, Bowen Yu, Tingwen Liu, Zhenyu Zhang, Jiawei Sheng, Xue Mengge, and Hongbo Xu. 2021c. Improving distantly-supervised named entity recognition with self-collaborative denoising learning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10746-10757, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
330
+
331
+ Kang Zhou, Yuepei Li, and Qi Li. 2022. Distantly supervised named entity recognition via confidence-based multi-class positive and unlabeled learning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7198-7211, Dublin, Ireland. Association for Computational Linguistics.
332
+
333
+ # A Training Process Analysis
334
+
335
+ We investigate the characteristics of the whole training process via four representative observational variables in Figure 6. As the training loss decreases, the span F1 scores fluctuate significantly, mainly due to the rapid shift between the numbers of predicted spans for the non-entity class $\epsilon$ and for the entity classes in $\mathcal{E}$. We infer that this shift is induced by the enhanced ability to recognize non-entity spans, including the representation reconstruction for each entity class. After the model's ability to identify non-entity spans reaches a steady state, the number of predicted spans for the entity classes in $\mathcal{E}$ steadily increases.
336
+
337
+ In addition, the extreme class imbalance can be seen intuitively by comparing the number of predicted spans for the non-entity class $\epsilon$ (around 1250) with that for the entity classes in $\mathcal{E}$ (around 25).
338
+
339
+ ![](images/4bfb218ca38bf84bda3ee83958235ef9ad60f3b63af2a9d487d6539115515f0b.jpg)
340
+ Figure 6: The training process analysis with four observational variables on CoNLL03. The horizontal axis represents the training steps, and the vertical axis has a different meaning in each of the four subgraphs: (a) the total training loss $\mathcal{L}_t$ ; (b) the number of predicted spans for the non-entity class $\epsilon$ ; (c) the number of predicted spans for the entity classes in $\mathcal{E}$ ; (d) the span F1 scores on the development set.
341
+
342
+ Thus the design of class-wise thresholds is vital to alleviate the class imbalance problem.
343
+
344
+ # B Parameter Study
345
+
346
+ # B.1 Upper Bound Threshold
347
+
348
+ ![](images/cd39c92a4f607083410c415e48328633d6c68190eda2402cfcf59469cf009128.jpg)
349
+ Figure 7: The parameter study of the upper bound threshold on three benchmarks. The horizontal axis represents the threshold values $\tau$ , and the vertical axis represents span F1 scores.
350
+
351
+ We investigate the effect of the threshold upper bound $\tau$ in Figure 7. In general, all three datasets achieve high performance around an upper bound of 0.9. With higher thresholds, the number of predicted non-entity spans decreases, so early training concentrates more on the entity classes in $\mathcal{E}$ . On CoNLL03 and ACE04, the optimal results are achieved at high thresholds, suggesting that reducing the number of non-entity spans at the early stage helps the feature extraction of entity classes to some extent. However, the Tweet dataset obtains comparable performance with small thresholds. We assume this is because the Tweet dataset
352
+
353
+ is inherently noisier. With a small threshold, the noise in the pseudo labels is too heavy for the model to memorize, so the comparable performance is obtained largely by chance.
354
+
355
+ # B.2 Masking Probability
356
+
357
+ <table><tr><td>Strong \ Weak</td><td>0.05</td><td>0.10</td><td>0.20</td></tr><tr><td>0.05</td><td>0.613</td><td>0.605</td><td>0.561</td></tr><tr><td>0.10</td><td>0.611</td><td>0.596</td><td>0.592</td></tr><tr><td>0.20</td><td>0.679</td><td>0.643</td><td>0.596</td></tr></table>
359
+
360
+ Table 6: The parameter study of masking probability on Wikigold, via the span F1 scores. The first row represents the maximum masking probability in the weak augmentation, and the first column represents the maximum masking probability in the strong augmentation.
361
+
362
+ We study the masking probability on Wikigold. Based on the results in Table 6, the combination of weak augmentation with a low masking probability and strong augmentation with a high masking probability performs well: the lower-left cells (0.679, 0.643, 0.611) show high performance. These results agree with intuition: since weak augmentation with a low masking probability preserves more useful information about the input sentence, the pseudo label generated from the weakly-augmented data is more reliable than that from the strongly-augmented data.
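The weak/strong pairing above can be illustrated with a small token-masking helper. Note that the actual model masks entries of the attention matrix; token-level masking and the function name here are assumptions made purely for illustration.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, max_prob, rng=None):
    """Randomly replace tokens with [MASK].

    A low max_prob (e.g. 0.05) plays the role of the weak augmentation
    and a high max_prob (e.g. 0.20) the strong one, matching the rows
    and columns of Table 6.
    """
    rng = rng or random.Random(0)
    p = rng.uniform(0.0, max_prob)  # masking rate sampled up to the maximum
    return [MASK if rng.random() < p else tok for tok in tokens]
```

Weakly-augmented views keep most tokens intact, so pseudo labels derived from them are more trustworthy, which is exactly the intuition the table supports.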
363
+
364
+ # B.3 Dynamic Weighting
365
+
366
+ We explore three different designs of dynamic weighting; the results are shown in Figure 8a. Figure 8b visualizes these mappings from the training phase $\hat{t}$ to the distant label weight $\lambda_{y}$ and the pseudo label weight $\lambda_{p}$ . The mappings are defined as follows:
367
+
368
+ Case 1 $\left\{ \begin{array}{ll}\lambda_{y} = 1 - \hat{t},\\ \lambda_{p} = \hat{t} \end{array} \right.$
369
+ Case 2 $\begin{cases} \lambda_y = (\sin (0.5\cdot \pi (\hat{t} -1)))^2\\ \lambda_p = (\cos (0.5\cdot \pi (\hat{t} +1)))^2 \end{cases}$
370
+ Case 3 $\begin{cases} \lambda_y = \left(\cos \left(0.5\cdot \pi (\hat{t} +1)\right) + 1\right)^2\\ \lambda_p = \left(\sin \left(0.5\cdot \pi (\hat{t} -1)\right) + 1\right)^2 \end{cases}$
371
+
372
+ where $\hat{t} = t / t_{total} \in [0,1]$ and $t_{total}$ is the total number of training steps. Case 3 is used in our work.
373
+
374
+ We design the above three mappings with the following considerations. (1) The general idea is to decrease the distant label weight and increase the pseudo label weight as training proceeds. (2) Before the model obtains useful features for the entity classes, training should focus mainly on the distant label, so we slow down the growth of the pseudo label weight. (3) We also accelerate the decline of the distant label weight to avoid overfitting.
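Cases 2 and 3 can be checked numerically: both give $\lambda_y = 1, \lambda_p = 0$ at $\hat{t} = 0$ and the reverse at $\hat{t} = 1$, while Case 3 additionally drops the distant weight quickly and grows the pseudo weight slowly. The code below is a direct transcription of those formulas for sanity checking, not the authors' implementation.

```python
import math

def case2(t_hat):
    """Case 2: squared sinusoids, symmetric decay/growth."""
    lam_y = math.sin(0.5 * math.pi * (t_hat - 1)) ** 2
    lam_p = math.cos(0.5 * math.pi * (t_hat + 1)) ** 2
    return lam_y, lam_p

def case3(t_hat):
    """Case 3 (used in this work): distant weight declines fast early,
    pseudo weight grows slowly early."""
    lam_y = (math.cos(0.5 * math.pi * (t_hat + 1)) + 1) ** 2
    lam_p = (math.sin(0.5 * math.pi * (t_hat - 1)) + 1) ** 2
    return lam_y, lam_p
```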
375
+
376
+ ![](images/60c37d2507d41816d9cb353328b62a803e894b96b748126b7b8bb29c0ff70b1a.jpg)
377
+ (a)
378
+
379
+ ![](images/36b616642d87217e42435ce2a54e893cdbaf33d500b8c2d1579b349f273157c8.jpg)
380
+
381
+ ![](images/1762e06300e849a166486390414d8dcd905aa92261486e57ad3b4a628b22e373.jpg)
382
+ (b)
383
+ Figure 8: (8a) The ablation study of the dynamic weighting on CoNLL03, via the span F1 score and training epoch; (8b) The visualization of the different mappings. The X-axis represents the training phase and the Y-axis represents the distant/pseudo label weights.
384
+
385
+ ![](images/3c766db76bf5652d72a47b38e67e8430dd7a2ca85d2044a3369ed592600f2a36.jpg)
386
+
387
+ As seen from the results in Figure 8, the more delicately designed mappings yield higher model performance, in terms of both span F1 score and convergence speed.
388
+
389
+ # C Distant Label Generation in Nested Case
390
+
391
+ Though many works focus on distantly-supervised NER in the flat case, studies of the nested case are rare. As in the fully supervised NER task, recognizing nested named entities is also essential for downstream applications. Hence, we extend distantly-supervised NER to the nested case.
392
+
393
+ The span-based schema makes predictions at the entity level and has shown high performance in the flat case. We show that our framework can further improve the ability to uncover unlabeled and mislabeled entities in the nested case.
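The reason a span-based schema handles nesting naturally is that every candidate span up to a maximum length is scored independently, so overlapping spans can both be entities. A generic enumeration sketch (illustrative, not the authors' code; the max-length values match Table 7):

```python
def enumerate_spans(tokens, max_len):
    """All candidate spans (start, end), inclusive, with length <= max_len.

    Table 7 uses max_len = 9 for the flat benchmarks and 12 for the
    nested ACE benchmarks. Nested entities are covered automatically,
    since overlapping spans are separate candidates.
    """
    spans = []
    for start in range(len(tokens)):
        for end in range(start, min(start + max_len, len(tokens))):
            spans.append((start, end))
    return spans
```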
394
+
395
+ Distant label generation with external knowledge bases is time-consuming, considering the collection of external dictionaries and the design of matching rules. In this work, we attempt to construct the noisy nested dataset by artificially adding
396
+
397
+ <table><tr><td>Dataset</td><td>CoNLL03</td><td>OntoNotes5.0</td><td>Tweet</td><td>Webpage</td><td>Wikigold</td><td>ACE2004</td><td>ACE2005</td></tr><tr><td>Learning Rate</td><td>3e-6</td><td>3e-6</td><td>3e-5</td><td>3e-5</td><td>3e-6</td><td>3e-6</td><td>3e-6</td></tr><tr><td>Max. Len. Span</td><td>9</td><td>9</td><td>9</td><td>9</td><td>9</td><td>12</td><td>12</td></tr><tr><td>Train Epoch</td><td>40</td><td>15</td><td>200</td><td>300</td><td>250</td><td>250</td><td>200</td></tr></table>
398
+
399
+ noise to the ground-truth labels, which involves the following steps: (1) define the noise types of named entities based on the ground-truth labels; (2) calculate the frequency of each noise type in a dataset; (3) generate the noisy labels according to these statistics.
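A minimal sketch of these three steps applied to one sentence's gold spans, assuming per-entity noise probabilities and a type-confusion table have already been estimated in step (2); all names and the exact boundary-shift rule are hypothetical, not the authors' implementation.

```python
import random

def inject_noise(entities, p_missing, p_boundary, p_type, type_confusion, rng=None):
    """Inject missing / boundary / type errors into gold spans.

    entities: list of (start, end, label) gold spans.
    p_*: per-entity probabilities of each noise type (from step 2).
    type_confusion: dict mapping a true label to confusable wrong labels.
    """
    rng = rng or random.Random(0)
    noisy = []
    for start, end, label in entities:
        r = rng.random()
        if r < p_missing:
            continue  # missing error: the matcher never finds this entity
        elif r < p_missing + p_boundary:
            # boundary error: perturb the boundary, keep the type
            noisy.append((start, max(start + 1, end - 1), label))
        elif r < p_missing + p_boundary + p_type:
            # type error: keep the boundary, sample a confusable wrong type
            noisy.append((start, end, rng.choice(type_confusion[label])))
        else:
            noisy.append((start, end, label))  # left clean
    return noisy
```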
400
+
401
+ ![](images/62ec49530fa1384c1b0a965e1f1e3ed3c80b34b45fb58ad4b4a245087e73503c.jpg)
402
+ Figure 9: (9a) Illustration of the three predefined noise types; (9b) The statistics of correctly/incorrectly labeled entities, with a detailed breakdown of the noise types in the incorrectly labeled case; (9c) The statistics of type errors, where each value is the probability that an entity is mislabeled from the true type (row) to a wrong type (column).
403
+
404
+ The incorrect annotations consist of missing, boundary, and type errors. A missing error means that an entity in the sentence (labeled in the ground-truth training set) is not identified during the rule-based matching process. A boundary error refers to an entity with an incorrectly labeled boundary but a correctly labeled type, and a type error refers to an entity with a correctly labeled boundary but an incorrectly labeled type.
405
+
406
+ Taking CoNLL03 as an example, we compute statistics of the incorrectly labeled entities in the ground-truth training set. The three predefined noise types cover all incorrectly-labeled entities, as shown in Figure 9b. In addition, type errors occur more often when the semantic similarity between entity classes is relatively large, as shown in Figure 9c. Then we generate the noisy
407
+
408
+ labels for the ACE04 and ACE05 datasets, using the statistics in Figures 9b and 9c.
409
+
410
+ # D Dataset Statistics
411
+
412
+ Table 7: Hyper-parameter settings in the DS-NER task. Learning Rate represents the initial learning rate with a cosine learning rate decay schedule; Max. Len. Span represents the maximum length of the candidate spans; Train Epoch represents the maximum epochs in the training process.
413
+
414
+ <table><tr><td>Dataset</td><td># types</td><td># samples</td><td># entities</td><td># nested entities</td></tr><tr><td>CoNLL03</td><td>4</td><td>14041</td><td>17781</td><td>-</td></tr><tr><td>ON5.0</td><td>18</td><td>115812</td><td>125366</td><td>-</td></tr><tr><td>Tweet</td><td>10</td><td>2393</td><td>994</td><td>-</td></tr><tr><td>Webpage</td><td>4</td><td>385</td><td>393</td><td>-</td></tr><tr><td>Wikigold</td><td>4</td><td>1142</td><td>2282</td><td>-</td></tr><tr><td>ACE2004</td><td>7</td><td>6200</td><td>15745</td><td>3355</td></tr><tr><td>ACE2005</td><td>7</td><td>7292</td><td>17695</td><td>3438</td></tr></table>
415
+
416
+ Table 8: Statistics in the distantly-labeled training set. # types: the number of the pre-defined entity classes; # samples: the number of the training samples; # entities: the number of the distantly-labeled entities; # nested entities: the number of the distantly-labeled nested entities.
417
+
418
+ # E Hyper-parameter and Baseline Setting
419
+
420
+ Detailed hyper-parameter settings for each dataset are shown in Table 7. Among them, we mainly tune the initial learning rate and the number of training epochs, where the initial learning rate is chosen from $\{3\mathrm{e} - 5,3\mathrm{e} - 6\}$ and the training epoch from $\{15,30,40,50,200,250,300\}$ . The remaining parameters use the defaults of Hugging Face Transformers. We conduct the experiments on an NVIDIA Tesla V100 GPU.
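The cosine learning-rate decay mentioned in the Table 7 caption can be written as a one-line schedule; this is a generic sketch of cosine decay, not the specific scheduler from the Transformers library.

```python
import math

def cosine_lr(step, total_steps, init_lr):
    """Cosine decay from init_lr at step 0 down to 0 at the final step."""
    return init_lr * 0.5 * (1.0 + math.cos(math.pi * step / total_steps))
```

For example, with `init_lr = 3e-6` the rate passes through half its initial value at the midpoint of training and reaches 0 at the end.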
421
+
422
+ The baselines in the nested case are all implemented with the span-based schema. The average predictions of two ensemble models are used for the Ensemble baseline in Table 2.
423
+
424
+ A For every submission:
425
+
426
+ A1. Did you describe the limitations of your work?
427
+
428
+ In Limitations Section
429
+
430
+ A2. Did you discuss any potential risks of your work?
431
+
432
+ No risks.
433
+
434
+ A3. Do the abstract and introduction summarize the paper's main claims?
435
+
436
+ Left blank.
437
+
438
+ A4. Have you used AI writing assistants when working on this paper?
439
+
440
+ Left blank.
441
+
442
+ B Did you use or create scientific artifacts?
443
+
444
+ Left blank.
445
+
446
+ B1. Did you cite the creators of artifacts you used?
447
+
448
+ Left blank.
449
+
450
+ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
451
+
452
+ The data used in our work is open source.
453
+
454
+ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
455
+
456
+ The data used in our work is open source.
457
+
458
+ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
459
+
460
+ The data used in our work is open source.
461
+
462
+ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
463
+
464
+ The data used in our work is open source.
465
+
466
+ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
467
+
468
+ In Experiment Section.
469
+
470
+ C Did you run computational experiments?
471
+
472
+ In Experiment Section.
473
+
474
+ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
475
+
476
+ In Experiment Section.
477
+
478
+ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? In Experiment Section.
479
+ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? In Experiment Section.
480
+ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? In Experiment Section.
481
+
482
+ D Did you use human annotators (e.g., crowdworkers) or research with human participants? Left blank.
483
+
484
+ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? No response.
485
+ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? No response.
486
+ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? No response.
487
+ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? No response.
488
+ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? No response.
2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ede95d9610ae933ba52a6fc63777dbec06f0619ff6478c354241ea104c475078
3
+ size 556981
2023/A Class-Rebalancing Self-Training Framework for Distantly-Supervised Named Entity Recognition/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/14b40f42-41c5-437e-b7fd-c53ddc62efec_content_list.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/14b40f42-41c5-437e-b7fd-c53ddc62efec_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/14b40f42-41c5-437e-b7fd-c53ddc62efec_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:50c315f51d96a2b4c7c727cfee7900b6527029ea01c11cbdf848e7049d72da14
3
+ size 416375
2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/full.md ADDED
@@ -0,0 +1,401 @@
1
+ # A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT
2
+
3
+ Youbin Lee, Deokgi Kim, Byung-Won On
4
+
5
+ Kunsan National University
6
+
7
+ {hanbin0694, thekey1220, bwon}@kunsan.ac.kr
8
+
9
+ Ingyu Lee
10
+
11
+ Yeungnam University
12
+
13
+ inlee3941@gmail.com
14
+
15
+ # Abstract
16
+
17
+ Until now, few studies have explored Automated Creative Essay Scoring (ACES), in which a pre-trained model automatically labels an essay as creative or non-creative. Since the creativity evaluation of essays is very subjective, each evaluator often has his or her own criteria for creativity. For this reason, quantifying creativity in essays is very challenging. In this work, as a preliminary study toward developing a novel model for ACES, we deeply investigate the correlation between creative essays and expressiveness. Specifically, we explore how rare tokens affect the evaluation of creativity in essays. To this end, we present five distinct methods to extract rare tokens, and conduct a comparative study on the correlation between rare tokens and creative essay evaluation results using BERT. Our experimental results showed a clear correlation between rare tokens and creative essays. On all test sets, the accuracy of our rare token masking-based BERT (ramBERT) model improved over the existing BERT model by up to $14\%$ .
18
+
19
+ # 1 Introduction
20
+
21
+ In the era of the Fourth Industrial Revolution, new knowledge creation based on existing knowledge is becoming more important than ever. With the advent of Artificial Intelligence (AI) technology, which is the core of this industrial revolution, AI takes charge of simple repetitive tasks and allows humans to focus more on creative activities, so that new innovations can be achieved through collaboration between humans and AI. This knowledge creation is based on human creative thinking (Noh and Kim, 2008).
22
+
23
+ One of the representative creative thinking activities is writing. Creative texts are new and at the same time communicable to readers within social and cultural contexts. Creative writing is the act of writing creative texts. Specifically, it is defined as a writing activity in which writers express their
24
+
25
+ new and original ideas so that they can be communicated appropriately and effectively in social and cultural contexts. In this sense, through creative writing, middle and high school students can develop their ability for creativity that is essential for the Fourth Industrial Revolution.
26
+
27
+ Creativity is a human mental activity, known as a complex characteristic that is difficult to explain. In addition, it is not limited to one academic field, but the nature of creativity varies slightly depending on the disciplines in question, such as linguistics, psychology, and literature. Academically, there is no clear definition of creativity yet.
28
+
29
+ Looking at the discussion so far, it has been defined differently depending on which part of creativity the researcher has focused on. For example, (Torrance, 1974) defined creativity as "the process of becoming sensitive to problems, flaws, gaps in knowledge, missing elements, and incongruities", (Sternberg and Lubart, 1999) as "generating new and useful ideas," and (Plucker et al., 2004) as "the interaction between processes and capacities that produce novel and useful outputs within a specific social context." To date, the most widely accepted definition of creativity by many researchers is "an individual's ability to create something new and appropriate."
30
+
31
+ Table 1 summarizes the creativity evaluation metric used to evaluate actual creative writing. Creativity in writing is largely divided into creativity in process and creativity in outcome. The former refers to the cognitive process leading to text production, and the latter refers to creativity evaluation through the resulting text. Since computational creativity mainly focuses on the latter rather than the former, we also focus on the latter in this work.
32
+
33
+ As shown in the table, the creativity in outcome is divided into creativities in expression and content. Because clearly evaluating the creativity in content is implausible with current technology, we focus now on assessing the creativity in expression while leaving the creativity in content as a topic for our future research.

+ <table><tr><td>Main category</td><td>Middle category</td><td>Subdivision</td><td>Creativity evaluation index</td></tr><tr><td rowspan="8">Creativity in Writing</td><td rowspan="4">Creativity in Process</td><td rowspan="4"></td><td>Fluency</td></tr><tr><td>Flexibility</td></tr><tr><td>Originality</td></tr><tr><td>Elaboration</td></tr><tr><td rowspan="4">Creativity in Outcome</td><td rowspan="2">Creativity in Expression</td><td>Originality</td></tr><tr><td>Appropriateness</td></tr><tr><td rowspan="2">Creativity in Content</td><td>Originality</td></tr><tr><td>Appropriateness</td></tr></table>

+ Table 1: Creativity evaluation metric.
40
+
41
+ The creativity in expression has two common creativity evaluation indices: one is originality and the other is appropriateness. Technically, originality is the ability to produce new and unique expressions that differ from existing ones, while appropriateness covers all the requirements of 'good writing', such as strictly following grammatical rules and, in any case, meeting the requirements appropriate for the given context.
42
+
43
+ In order to quantify the appropriateness index, various Automated Essay Scoring (AES) models that take essays as input have been actively developed in the field of natural language processing in recent years. We introduce the state-of-the-art AES models in detail in Section 2. Please note that studying the appropriateness index is out of the scope of this work.
44
+
45
+ In this paper, we focus only on the originality index of the creativity in expression. Throughout this paper, an expression can be a subword, word, $n$ -gram, or span (phrase or sentence) token. Furthermore, to clearly quantify the originality index, we first consider it in terms of new and unique tokens within the given corpus; however, since the meaning of "new and unique" is ambiguous, on second thought we define it in terms of rare tokens in the corpus.
46
+
47
+ The goal of our research is to deeply investigate the correlation between rare tokens and creativity assessment. In the corpus, rare tokens can be extracted through various methods such as Byte Pair Encoding (BPE), Inverse Document Frequency (IDF), Clustering in latent semantic spaces, SpanBERT, and existing rare word dictionaries including Stanford Rare Word (Luong et al., 2013), Cambridge Rare Word (Pilehvar et al., 2018), Contextual Rare Word (Khodak et al., 2018), and Definitional Nonce (Herbelot and Baroni, 2017).
48
+
49
+ In our framework, Bidirectional Encoder Representations from Transformers (BERT) is pre-trained with a Masked Language Model (MLM). Unlike the existing BERT model, rare words rather than random words are masked and predicted in our model. Then, in the fine-tuning step, the pre-trained BERT model learns with the training set of essays.
52
+
53
+ Our contributions are the followings:
54
+
55
+ - To the best of our knowledge, we are the first to deeply study the correlation between rare tokens and creativity assessment in the automated essay scoring problem.
56
+ - We present how to extract rare tokens in various approaches: BPE, IDF, Clustering in latent semantic spaces, SpanBERT, and existing rare word dictionaries such as Stanford Rare Word, Cambridge Rare Word, and Contextual Rare Word.
57
+ - We built and validated a training set of 800 essays, labelled as creative or non-creative with the help of linguistics experts, from the Automated Student Assessment Prize (ASAP) dataset (ASAP, 2022). Our experimental results show that the accuracy of the BERT model pre-trained with rare tokens improved by up to $14\%$ in assessing creativity in essays compared to the existing BERT model.
58
+
59
+ # 2 Related Work
60
+
61
+ AES research has focused on generating hand-crafted features as input to classification or regression (Larkey, 1998; Rudner and Liang, 2002; Attali and Burstein, 2006; Yannakoudakis et al., 2011; Chen and He, 2013; Phandi et al., 2015; Cozma et al., 2018). Linguistic features such as style and grammar have been used (Lu et al., 2017; Ramalingam et al., 2018; Chen and He, 2013; Phandi et al., 2015). Content has been analyzed by latent semantic analysis (Ratna et al., 2007; Amalia et al., 2019; Ratna et al., 2018, 2019a,b; Shehab et al., 2018; Ratna et al., 2019c, 2015; Kakkonen et al., 2005; Xu et al., 2017), by WordNet (Al Awaida et al., 2019) and word embedding vectors (Dong and Zhang, 2016), and by using specific language features (Wong and Bong, 2019; Cheon et al., 2015) and artificial neural networks (Nguyen and Dery, 2016; Taghipour and Ng, 2016; Liang et al., 2018). Hybrid approaches that combine style and content analysis have brought further improvements (Ishioka and Kameda, 2006; Peng et al., 2010; Imaki et al., 2013; Alghamdi et al., 2014; Jin and He, 2015; Al-Jouie and Azmi, 2017; Contreras et al., 2018).
64
+
65
+ With the advent of deep learning technologies, AES has improved through models pre-trained on large datasets (Taghipour and Ng, 2016; Dong and Zhang, 2016; Dong et al., 2017; Wang et al., 2018; Tay et al., 2018; Farag et al., 2018; Song et al., 2020; Ridley et al., 2021; Muangkammuen and Fukumoto, 2020; Mathias et al., 2020; Jin et al., 2018; Dasgupta et al., 2018). LSTMs and RNNs are natural choices for the AES task, and some researchers have applied BERT to it. BERT-based approaches (Uto et al., 2020; Rodriguez et al., 2019; Mayfield and Black, 2020) generally show inferior performance to LSTMs (Dong et al., 2017; Tay et al., 2018). However, Cao et al. (2020) and Yang et al. (2020) are known to achieve performance comparable to LSTM-based systems even with BERT. Among other variations, Song et al. (2020) used transfer learning to overcome the limited size of training data, Wu et al. (2021) applied R-Drop to avoid overfitting, and Wang et al. (2022) used transfer learning with multi-scale essay representations with BERT.
66
+
67
+ On the other hand, AES systems have also been studied from a novelty or creativity detection perspective. Liang et al. (2021) proposed a model to detect creative essays using Generative Adversarial Networks on the ASAP data. Doboli et al. (2020) used a cognitively inspired approach to detect novel ideas in short text. Chikkamath et al. (2020) applied machine learning and deep learning approaches with various embedding vectors to find new technology in patent data. Bhattarai et al. (2020) proposed a Tsetlin machine to detect novel text using conjunctive clauses. Nandi and Basak (2020) proposed several CNN architectures to detect novel texts. Beaty and Johnson (2021) proposed an open platform to detect creativity based on semantic distances between word embeddings. Amplayo et al. (2019) evaluated academic research paper novelty detection using time and impact simulations. Simpson et al. (2019) proposed a Bayesian approach to predict humor and metaphor scores using Gaussian process preference learning. Christophe et al. (2021; 2020) proposed a framework to detect new topics by monitoring the geometric properties of word embeddings. Ghosal et al. proposed a relative-document-vector-based CNN model (Ghosal et al., 2018a) and the TAP-DLND benchmark datasets (Ghosal et al., 2018b, 2022).
68
+
69
+ Much research has been done on detecting creative essays, as mentioned in this section. However, as far as the authors are aware, this is the first attempt to use low-frequency words to detect creative essays. Since rare words are highly correlated with creative essays, we conjecture that the proposed approach will show a promising improvement.
70
+
71
+ # 3 Methodology
72
+
73
+ The ultimate goal of our study is to understand how strongly a pre-trained encoder model like BERT for ACES is affected by rare tokens.
74
+
75
+ To do this, given a large-sized set of text documents, we first extract a list of rare tokens using various approaches that we will discuss in detail in Section 3.1. Then, we will explain in detail our framework for comparative analysis of ACES in Section 3.2. Finally, in Section 3.3, we will design main questions for data-driven insights we want to know about through this study.
76
+
77
+ # 3.1 Extraction of Rare Tokens
78
+
79
+ Since our research focuses on creativity in expression, we consider various types of tokens as expressions. A token $t_i^*$ is one of subword $(t_i^s)$ , word $(t_i^w)$ , $n$ -gram $(t_i^n)$ , and span $(t_i^S)$ , where a span corresponds to a phrase, clause, or even sentence. In the case of $n$ -gram tokens, to distinguish them from spans, 2-grams are used in this study. That is, $t_i^* \in \{t_i^s, t_i^w, t_i^{n=2}, t_i^S\}$ . For example, in the pre-training step for BERT, when rare tokens are masked in a sentence like "we do not want to squander privileges and our essential things", $t_i^s$ are 'sq', '##uan', and '##der'; $t_i^w$ is 'squander'; $t_i^{n=2}$ is 'squander privileges'; and $t_i^S$ is 'squander privileges and our essential things'.
80
+
81
+ A set of rare tokens $\Phi_i = \{r_1, r_2, \dots, r_m\}$ is created through a method $f_i \in F = \{f_1, f_2, f_3, f_4, f_5\}$ from a corpus of large text documents $C = \{d_1, d_2, \dots, d_n\}$ used to pre-train the BERT model. For a comparative study, we create a total of 7 sets of rare tokens to see if there is a correlation between creative evaluation results and rare tokens. Those sets are $\Phi_{i=1 \sim 7}$ .
82
+
83
+ The first set of rare tokens is constructed as defined in Eq. 1.
84
+
85
+ $$
86
+ \Phi_1 = \left\{ r_i \mid r_i = f_1(x) \wedge r_i = t_i^s \right\} \tag{1}
87
+ $$
88
+
89
+ , where $x$ is a word token and $f_{1}$ is a subword-based tokenizer. There are various tokenizers such as Byte-Pair Encoding (BPE), Word Piece Model (WPM), Unigram, and Sentence Piece Model (SPM); we use BPE in this work. As the vocabulary of a pre-trained model grows, the dimension of the word embedding matrix increases and the model becomes more complex. Therefore, units are used instead of whole words to reduce the vocabulary size. A unit is a group of frequently appearing characters in a corpus and refers to a word or subword. Common words such as 'makers' and 'over' are set as single units because they appear frequently in the corpus, while 'jet' is a rare word, so it is divided into the units 'j' and 'et'. For example, in a sentence like "jet makers feud over seat width with big orders at stake", the units are {j, et, makers, fe, ud, over, seat, width, with, big, orders, at, stake}.
92
+
93
+ Initially, function $f_{1}$ tokenizes the given corpus, e.g., {('hug', 10), ('pug', 5), ('pun', 12), ('bun', 4), ('hugs', 5)}, where each pair is a token and its frequency. After splitting words into characters using a pre-defined dictionary such as ['b', 'g', 'h', 'n', 'p', 's', 'u'], we get {('h' 'u' 'g', 10), ('p' 'u' 'g', 5), ('p' 'u' 'n', 12), ('b' 'u' 'n', 4), ('h' 'u' 'g' 's', 5)}. From this result, the most frequently appearing character pair is selected. For instance, the frequency of 'hu' is 15, while that of 'ug' is 20. Since 'ug' has the highest frequency, it is added to the dictionary, i.e., ['b', 'g', 'h', 'n', 'p', 's', 'u', 'ug']. This process is repeated for a number of iterations $i$ specified by the user. If $i = 3$ , the final dictionary includes the units ['b', 'g', 'h', 'n', 'p', 's', 'u', 'ug', 'un', 'hug']. Finally, we consider the top- $k$ units with the lowest frequency as rare tokens $t_{i}^{s}$ .
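The BPE merge loop described above can be sketched in a few lines of Python (a minimal illustration; the function and variable names are ours, not the authors' code):

```python
from collections import Counter

def bpe_merges(word_freqs, num_merges):
    """Learn BPE merge units from a {word: frequency} corpus."""
    # Start from character-level splits, e.g. 'hug' -> ('h', 'u', 'g').
    splits = {tuple(word): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in splits.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent pair becomes a new unit
        merges.append("".join(best))
        # Replace every occurrence of the best pair with the merged unit.
        new_splits = {}
        for symbols, freq in splits.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_splits[tuple(out)] = freq
        splits = new_splits
    return merges

corpus = {"hug": 10, "pug": 5, "pun": 12, "bun": 4, "hugs": 5}
print(bpe_merges(corpus, 3))  # -> ['ug', 'un', 'hug']
```

With $i = 3$ this reproduces exactly the three merges of the walkthrough, so the final dictionary gains 'ug', 'un', and 'hug'.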
94
+
95
+ The second set of rare tokens is created as defined in Eq. 2.
96
+
97
+ $$
98
+ \Phi_2 = \left\{ r_i \mid r_i = f_2(x) \wedge r_i = t_i^w \right\} \tag{2}
99
+ $$
100
+
101
+ , where function $f_{2}$ decides whether $x$ is a rare word or not. To implement $f_{2}$ , we use Inverse Document Frequency (IDF), which assigns a high score to a word that appears infrequently in the corpus, on the assumption that it is an important word. For example, proper nouns such as 'biden' and 'google' have high values, while stopwords such as 'in' and 'the' have low scores. Suppose that the number of documents in a corpus is one million and that the number of documents containing 'biden' is one thousand. Then the IDF value of 'biden' is $f_{2}(\text{'biden'}) = 1 + \log_{10} \left( \frac{1{,}000{,}000}{1{,}000} \right) = 4$ . Finally, we select the top- $k$ words with the highest IDF values as the rare words for pre-training the rare word-based masked language model in BERT.

+ ![](images/b1313625608229958022b19984d14445e50b5a41444aede8c6243dbc9fafc392.jpg)
+ Figure 1: Overall concept of clustering contextualized vectors in the semantic space.
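The worked IDF example can be reproduced in a couple of lines (assuming the base-10 logarithm implied by the result of 4):

```python
import math

def idf(num_docs, doc_freq):
    """Add-one IDF with a base-10 log: rarer words get higher scores."""
    return 1 + math.log10(num_docs / doc_freq)

# The example from the text: 'biden' in 1,000 of 1,000,000 documents.
print(idf(1_000_000, 1_000))   # -> 4.0
# A stopword appearing in nearly every document scores close to 1.
print(idf(1_000_000, 900_000))
```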
107
+
108
+ The third set of rare tokens is created as defined in Eq. 3.
109
+
110
+ $$
111
+ \Phi_3 = \left\{ r_i \mid r_i = f_3(x) \wedge r_i = t_i^w \right\} \tag{3}
112
+ $$
113
+
114
+ , where function $f_{3}$ performs two steps. In the first step, the contextualized vector $v$ corresponding to each word $x$ is obtained using the pre-trained BERT and projected into the latent semantic space; that is, $f_{BERT}(x) = v$ . In the second step, all contextualized vectors are clustered in the semantic space. Since we do not know in advance how many clusters exist in the semantic space, we must use an unsupervised clustering method. In this work, we use Expectation-Maximization (EM), which clusters the points by modeling them as generated by a mixture of $k$ Gaussians. In the Expectation step (E-step), we compute $P(C_{j}|v_{i}) = \frac{P(C_{j})P(v_{i}|C_{j})}{\Sigma_{l=1}^{k}P(C_{l})P(v_{i}|C_{l})}$ . In the Maximization step (M-step), for every cluster $C_{j}$ , we update the weight, mean, and variance of the cluster by $P(C_{j}) = \frac{1}{n}\Sigma_{i=1}^{n}P(C_{j}|v_{i})$ , $\mu_{C_{j}} = \frac{\Sigma_{i=1}^{n}v_{i}\,P(C_{j}|v_{i})}{\Sigma_{i=1}^{n}P(C_{j}|v_{i})}$ , and $\sigma_{C_{j}}^{2} = \frac{\Sigma_{i=1}^{n}(v_{i} - \mu_{C_{j}})^{2}P(C_{j}|v_{i})}{\Sigma_{i=1}^{n}P(C_{j}|v_{i})}$ , using the $P(C_{j}|v_{i})$ computed in the E-step.
115
+
116
+ Figure 1 illustrates the output of $f_{3}$ . There exist three clusters of contextualized vectors. For convenience, we denote the clusters as green, blue, and red clusters. The red cluster has a relatively small cluster size compared to the green and blue clusters. This means that the word vectors in the green
117
+
118
+ and blue clusters are related to common topics and expressions. Words corresponding to such vectors are likely to appear frequently in a corpus. On the other hand, words belonging to the red cluster are relatively likely to be rare words in the corpus. Therefore, to extract rare words through $f_{3}$ , we focus on the smallest cluster $C_{s}$ (the red cluster in Figure 1). Finally, we select only words corresponding to the contextualized vectors belong to $C_{s}$ . Assuming that there are three clusters $C_{1}, C_{2}$ and $C_{3}$ , where $C_{s} = C_{1}$ , all selected words must satisfy Eq. 4.
119
+
120
+ $$
121
+ \{ v \mid v \in C_s \wedge v \notin \left( C_2 \cup C_3 \right) \} \tag{4}
122
+ $$
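The smallest-cluster selection can be sketched with scikit-learn's `GaussianMixture`, which fits the mixture by EM. The vectors below are synthetic stand-ins for BERT's contextualized vectors, and all names are our own:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy stand-ins for contextualized word vectors (hypothetical data): two
# dense clusters of common-word vectors and one small cluster of rare words.
common_a = rng.normal(0.0, 0.3, size=(200, 8))
common_b = rng.normal(3.0, 0.3, size=(200, 8))
rare_vecs = rng.normal(-4.0, 0.3, size=(12, 8))
vectors = np.vstack([common_a, common_b, rare_vecs])
words = [f"w{i}" for i in range(len(vectors))]

# Step 2 of f3: EM clustering with k Gaussian components.
gmm = GaussianMixture(n_components=3, random_state=0).fit(vectors)
labels = gmm.predict(vectors)

# Eq. 4: keep only words whose vectors fall in the smallest cluster C_s.
sizes = np.bincount(labels, minlength=3)
c_s = int(np.argmin(sizes))
rare_words = [w for w, lab in zip(words, labels) if lab == c_s]
print(len(rare_words))  # the 12 planted rare-word vectors
```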
123
+
124
+ The fourth set $\Phi_4$ is the union of sets $\Phi_2$ and $\Phi_3$ . In general, rare tokens in $\Phi_2$ are extracted in terms of lexical representation, while those in $\Phi_3$ are extracted in terms of semantic representation. Therefore, if BERT is pre-trained with the rare word-based masked language model using $\Phi_4$ , we can see how rare words obtained by both representation approaches affect the creativity evaluation of essays. In addition, the fifth set $\Phi_5$ is similar to $\Phi_3$ except that it uses $n$ -grams instead of words; we use $n = 2$ in our experiments. For instance, in a sentence like "we do not want to squander", in the first step of $f_3$ , $f_{BERT}(x) = v$ , where $x$ is 'we do', 'do not', 'not want', 'want to', or 'to squander', and the second step is the same. See Eq. 5.
125
+
126
+ $$
127
+ \Phi_5 = \left\{ r_i \mid r_i = f_3(x) \wedge r_i = t_i^{n=2} \right\} \tag{5}
128
+ $$
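Extracting the 2-gram tokens used for $\Phi_5$ is straightforward; a minimal sketch reproducing the example above:

```python
def bigrams(sentence):
    """Split a sentence into the 2-gram tokens used for the fifth set."""
    words = sentence.split()
    return [" ".join(pair) for pair in zip(words, words[1:])]

print(bigrams("we do not want to squander"))
# -> ['we do', 'do not', 'not want', 'want to', 'to squander']
```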
129
+
130
+ We create the sixth set $\Phi_6$ through function $f_{4}$ as shown in Eq. 6.
131
+
132
+ $$
133
+ \Phi_6 = \left\{ r_i \mid r_i = f_4(x) \wedge r_i = t_i^S \right\} \tag{6}
134
+ $$
135
+
136
+ , where $x$ is a sequence of words that is the input of $f_4$ . Unlike the functions $f_1, f_2$ , and $f_3$ , it identifies whether a span of text is rare in a corpus. As we already discussed, creative expressions can be clauses, phrases, and even sentences, as well as subwords or words. Given $x$ , we attempt to find a span of $x$ that corresponds to a clause, a phrase, or a sentence. To implement $f_4$ , we first use SpanBERT to represent and predict spans of text, training span boundary representations to correctly predict the masked span. The final loss of SpanBERT for each token $x_{i}$ in the masked span $(x_{s},\dots,x_{e})$ , where $x_{s}$ and $x_{e}$ are the boundary tokens, sums the Masked Language Model (MLM) and Span Boundary Objective (SBO) losses: $L(x_{i}) = L_{MLM}(x_{i}) + L_{SBO}(x_{i})$ , where the MLM term is the usual masked-token cross-entropy and $L_{SBO}(x_{i}) = -\log P(x_{i} \mid x_{s}, x_{e}, p_{|p(x_{i}) - p(x_{s})|})$ , with $p_{|p(x_i) - p(x_s)|}$ the embedding of the relative position between $x_{i}$ 's position $p(x_{i})$ and $x_{s}$ 's position $p(x_{s})$ . See (Joshi et al., 2020) for details. After finding spans through SpanBERT, we use $f_{3}$ to detect rare spans in the corpus.
139
+
140
+ The last set of rare tokens is $\Phi_7$ , as defined in Eq. 7.
141
+
142
+ $$
143
+ \Phi_7 = \left\{ r_i \mid r_i = f_5(x) \wedge r_i = t_i^w \right\} \tag{7}
144
+ $$
145
+
146
+ Recently, several dictionaries such as Stanford Rare Word, Cambridge Rare Word, Contextual Rare Word, and Definitional Nonce have been made publicly available. The goal of constructing such dictionaries is to obtain good embedding vectors from a given corpus using Word2Vec. The main drawback of existing word embedding models is that they generate good embedding vectors for frequent words in the corpus, but not for rare and unseen words. To address this problem, advanced word2vec models have been presented in NLP, such as the Morphological Recursive Neural Network (morphoRNN) and Neural Language Model (NLM) (Luong et al., 2013), and a linear transformation of the additive model $v_{w}^{additive} = \frac{1}{|\Gamma_{w}|}\sum_{\gamma \in \Gamma_{w}}\frac{1}{|\gamma|}\sum_{w'\in \gamma}v_{w'}$ , where $\Gamma_w$ is the set of contexts that contain word $w$ .
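The additive model simply averages the context-word vectors within each context and then averages over contexts; a tiny numeric sketch with toy vectors and hypothetical words:

```python
import numpy as np

def additive_embedding(contexts, vec):
    """Additive rare-word embedding: average the context-word vectors in
    each context, then average over all contexts containing the rare word."""
    per_context = [np.mean([vec[w] for w in ctx], axis=0) for ctx in contexts]
    return np.mean(per_context, axis=0)

# Toy 2-d vectors (hypothetical): the rare word occurs in two short contexts.
vec = {"deep": np.array([1.0, 0.0]),
       "sea":  np.array([0.0, 1.0]),
       "blue": np.array([1.0, 1.0])}
contexts = [["deep", "sea"], ["blue", "sea"]]
print(additive_embedding(contexts, vec))  # -> [0.5  0.75]
```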
147
+
148
+ The function $f_{5}$ uses one of the rare word dictionaries: if $x$ matches a rare word in the dictionary, $x$ is added to $\Phi_7$ . For our experiment, we use the Harvard dictionary of rare words collected by the context-sensitive morphoRNN, which is the most representative one in this area.
149
+
150
+ # 3.2 ramBERT
151
+
152
+ In this section, we present our framework, called ramBERT, for comparative analysis of ACES. Figure 2 depicts the ramBERT model. First, we modify the masked language model of the existing BERT model so that rare tokens extracted through $f_{i} \in F$ , discussed in Section 3.1, are masked and predicted in the pre-training step. Unlike existing BERT models, by masking and correctly predicting rare tokens, the language model of ramBERT is likely to attend more to rare tokens than to common tokens in texts. Next, the pre-trained BERT model is trained with a training set of essays in the fine-tuning step. In the final step, it automatically classifies each essay in a test set as either creative or non-creative.

+ ![](images/d483a4cfc1685585f57b9339235ed140e182bfaac769b013534bab0167fcff01.jpg)
+ Figure 2: ramBERT: Our framework for comparative analysis of Automated Creative Essay Scoring (ACES).
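The rare-token-first masking idea can be sketched as follows (the helper name and selection policy are our own simplification, not the paper's exact TensorFlow implementation):

```python
import random

def mask_rare_first(tokens, rare_set, mask_prob=0.15, mask_token="[MASK]"):
    """Select MLM positions by preferring rare tokens over random ones.

    Rare tokens are masked first; if the masking budget is not used up,
    randomly chosen common tokens fill the remainder.
    """
    budget = max(1, int(len(tokens) * mask_prob))
    rare_pos = [i for i, t in enumerate(tokens) if t in rare_set]
    other_pos = [i for i in range(len(tokens)) if i not in set(rare_pos)]
    random.shuffle(rare_pos)
    random.shuffle(other_pos)
    chosen = (rare_pos + other_pos)[:budget]
    masked = list(tokens)
    for i in chosen:
        masked[i] = mask_token
    return masked, sorted(chosen)

sent = "we do not want to squander privileges and our essential things".split()
masked, positions = mask_rare_first(sent, rare_set={"squander"}, mask_prob=0.1)
print(masked[5])  # 'squander' is in the rare set, so it is masked first
```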
158
+
159
+ # 3.3 Main Questions
160
+
161
+ In this work, we will investigate the correlation between rare tokens and creative essay evaluation results. If there is any correlation, we will also examine how different types of rare tokens, such as subwords, words, $n$ -gram tokens, and spans, affect creative essay evaluation. In addition, we plan to look into which type of rare token has the greatest impact on creativity assessment. Furthermore, we will find out which of $F = \{f_1, f_2, f_3, f_4, f_5\}$ for extracting rare tokens is the most effective in evaluating essays for creativity. Please note that we now have seven sets of rare tokens $\Phi_{i=1\sim7}$ . We will measure the degree of overlap of rare tokens between each pair of sets in order to see how similar they are to each other. We will also investigate how creativity evaluation results change as the percentage of masked rare tokens is gradually increased.
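The pairwise overlap between two rare-token sets can be measured as a ROUGE-1-style unigram recall; a minimal sketch with hypothetical token sets (a simplification, not necessarily the paper's exact ROUGE computation):

```python
def rouge1_overlap(reference, candidate):
    """Unigram overlap of two rare-token sets, as a recall percentage of
    the reference set (a simple ROUGE-1-style measure)."""
    ref, cand = set(reference), set(candidate)
    return 100.0 * len(ref & cand) / len(ref)

phi_a = {"squander", "feud", "stake", "jet"}   # hypothetical set
phi_b = {"squander", "stake", "orbit"}         # hypothetical set
print(rouge1_overlap(phi_a, phi_b))  # -> 50.0
```

Because recall is taken against the reference set, the measure is asymmetric, which is why Tables 2 and 3 are not symmetric matrices.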
162
+
163
+ # 4 Experimental Set-up
164
+
165
+ For the experiment, we first implemented the five rare token extraction methods. We wrote Python scripts implementing $f_{1}$ with the BertTokenizer of Hugging Face, $f_{2}$ and $f_{3}$ with scikit-learn 1.2.0, $f_{4}$ with the SpanBERT base model (uncased) in PyTorch, and $f_{5}$ with the Stanford, Cambridge, and Contextual Rare Word dictionaries. We also modified the TensorFlow code of the BERT base model (uncased) to implement the rare token-based masked language model.
166
+
167
+ A total of 4,079,432 documents from Wikipedia were collected, and text was extracted by removing the HTML tags in each document. This refined text was used as input for BERT's pre-training. To train ramBERT, we used the same default parameters as the BERT base model: we set the batch size to 32, the number of epochs to 10, the learning rate to 2e-5, and the dropout rate to 0.1. In addition, we used the Adam optimizer and set the maximum sequence length to 128, the maximum number of predictions per sequence to 20, and the masked language model probability to 0.1.
170
+
171
+ To fine-tune ramBERT and perform the downstream task, we constructed a training set for creative essay assessment. First, we selected 800 essays at random from Prompt 1 of the ASAP dataset (ASAP, 2022). The topic of the essays is how computers affect people. In the original ASAP dataset, the essay score ranges from 2 to 12 points, and the higher the score, the better the writing, regardless of creativity. Then, each essay was labelled by three domain experts, who voted to classify it as either creative or non-creative.
172
+
173
+ All models were executed standalone on a high-performance workstation server with an Intel Xeon Scalable Silver 4414 2.20GHz CPU with 40 cores, 24GB RAM, and a 1TB SSD, and a GEFORCE RTX 3080 Ti D6 11GB BLOWER with 4,352 CUDA cores, 12GB RAM, and a 7Gbps memory clock.
174
+
175
+ # 5 Comparative Analysis
176
+
177
+ # 5.1 Correlation between Rare Tokens and Creative Essay Evaluation Results
178
+
179
+ In our study, the main goal is to see whether there is any correlation between rare tokens and creative essay evaluation results. Specifically, we present five rare token extraction methods $f_i \in F = \{f_1, f_2, f_3, f_4, f_5\}$ : $f_1$ finds rare tokens based on BPE; $f_2$ on IDF; $f_3$ on clustering contextualized vectors in the semantic space; $f_4$ on SpanBERT; and $f_5$ on existing dictionaries of rare words (e.g., the Stanford Rare Word dictionary).
180
+
181
+ Figure 3 shows the average accuracies of ramBERT using $F$ . In the figure, the baseline method is the existing BERT model in which random words are masked and predicted in the pre-training step.
182
+
183
+ ![](images/c3b95667eb5a04f2780c65f41e1ad8082409283183a1e81eb8e8199d5db421cf.jpg)
184
+ Figure 3: Average accuracies of five rare token extraction methods when the top- $30\%$ of rare tokens are masked.
185
+
186
+ The average accuracy of the baseline model is $74\%$ or so. On the other hand, the average accuracy of using $f_{1}, f_{2}, f_{3}, f_{4}$ , and $f_{5}$ is $79.2\%$ , $82.7\%$ , $84.2\%$ , $77.2\%$ , and $74.4\%$ , respectively. Compared to the baseline model, ramBERT using $f_{1}, f_{2}, f_{3}, f_{4}$ , and $f_{5}$ improved the accuracy by about $7\%$ , $12\%$ , $14\%$ , $4\%$ , and $0.5\%$ , respectively. Surprisingly, in all cases, ramBERT significantly improved the accuracy over BERT. These experimental results indicate that even rare tokens extracted by any method $f_{i} \in F$ have a strong influence on creative essay assessment.
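As a quick sanity check, the reported gains correspond to relative improvements over the roughly $74\%$ baseline:

```python
def rel_improvement(baseline, accuracy):
    """Relative accuracy improvement over the baseline, in percent."""
    return round(100.0 * (accuracy - baseline) / baseline, 1)

baseline = 74.0
for name, acc in [("f1", 79.2), ("f2", 82.7), ("f3", 84.2),
                  ("f4", 77.2), ("f5", 74.4)]:
    print(name, rel_improvement(baseline, acc))
# f1 -> ~7%, f2 -> ~12%, f3 -> ~14%, f4 -> ~4%, f5 -> ~0.5%
```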
187
+
188
+ In particular, among the five methods, rare tokens extracted by $f_{3}$ seem to correlate most strongly with creativity evaluation results; note that ramBERT using $f_{3}$ improved the accuracy by up to $14\%$ . Unlike the subword-based tokenizing $(f_{1})$ , lexical representation $(f_{2})$ , and advanced word embedding $(f_{5})$ approaches, $f_{3}$ is based on clustering contextualized vectors in the semantic space. That is, rare tokens extracted through this semantic representation approach are more useful than those extracted by the other extraction methods. This suggests that rare tokens extracted by considering the context of the corpus are a more dominant factor in evaluating creative essays than those extracted using superficial methods such as the lexical representation approach. Furthermore, the semantic representation approach is better than an advanced word embedding method such as the context-sensitive morphological RNN, which is limited to extracting rare words by affixes and frequencies.
189
+
190
+ Moreover, our hypothesis in designing $f_{3}$ is that only tokens corresponding to contextualized vectors belonging to the smallest cluster among several ones are considered to be rare in a corpus. Our experimental results showed that this hypothesis can be used to extract rare tokens that are helpful in evaluating creative essays. Consequently, Eq. 4 has been experimentally shown to be valid.

+ <table><tr><td></td><td>Φ1</td><td>Φ2</td><td>Φ3</td><td>Φ4</td><td>Φ5</td><td>Φ6</td><td>Φ7</td></tr><tr><td>Φ1</td><td>100.0</td><td>70.5</td><td>47.7</td><td>69.8</td><td>43.1</td><td>8.1</td><td>9.8</td></tr><tr><td>Φ2</td><td>61.6</td><td>100.0</td><td>49.9</td><td>87.2</td><td>45.4</td><td>8.4</td><td>9.2</td></tr><tr><td>Φ3</td><td>47.8</td><td>57.2</td><td>100.0</td><td>98.9</td><td>79.1</td><td>8.4</td><td>7.5</td></tr><tr><td>Φ4</td><td>46.9</td><td>67.0</td><td>64.5</td><td>100.0</td><td>53.2</td><td>8.4</td><td>7.7</td></tr><tr><td>Φ5</td><td>22.9</td><td>27.5</td><td>41.9</td><td>42.0</td><td>100.0</td><td>6.5</td><td>3.5</td></tr><tr><td>Φ6</td><td>5.1</td><td>6.0</td><td>5.3</td><td>7.9</td><td>7.9</td><td>100.0</td><td>1.0</td></tr><tr><td>Φ7</td><td>38.6</td><td>41.4</td><td>29.5</td><td>45.2</td><td>26.1</td><td>6.3</td><td>100.0</td></tr></table>

+ Table 2: ROUGE-1 of rare token sets.

+ <table><tr><td></td><td>Φ1</td><td>Φ2</td><td>Φ3</td><td>Φ4</td><td>Φ5</td><td>Φ6</td><td>Φ7</td></tr><tr><td>Φ1</td><td>100.0</td><td>46.6</td><td>20.0</td><td>36.6</td><td>0.05</td><td>2.2</td><td>1.0</td></tr><tr><td>Φ2</td><td>40.7</td><td>100.0</td><td>22.0</td><td>67.4</td><td>0.04</td><td>2.4</td><td>1.2</td></tr><tr><td>Φ3</td><td>20.0</td><td>25.3</td><td>100.0</td><td>58.8</td><td>0.09</td><td>2.3</td><td>0.7</td></tr><tr><td>Φ4</td><td>24.6</td><td>51.9</td><td>39.4</td><td>100.0</td><td>0.03</td><td>2.6</td><td>0.9</td></tr><tr><td>Φ5</td><td>0.02</td><td>0.02</td><td>0.04</td><td>0.02</td><td>100.0</td><td>0.0</td><td>0.0</td></tr><tr><td>Φ6</td><td>1.3</td><td>1.6</td><td>1.4</td><td>2.4</td><td>0.0</td><td>100.0</td><td>0.05</td></tr><tr><td>Φ7</td><td>3.9</td><td>5.1</td><td>2.7</td><td>5.2</td><td>0.0</td><td>0.4</td><td>100.0</td></tr></table>

+ Table 3: ROUGE-2 of rare token sets.
201
+
202
+ As shown in Figure 3, we can observe that rare tokens, regardless of their form (subwords, words, $n$-gram tokens, and spans), have a great impact on creativity evaluation results. However, among subwords, words, $n$-gram tokens ( $n = 2$ in our experiments), and spans, word-based rare tokens have the greatest impact on creative essay evaluation. The average accuracies of ramBERT with subwords ( $\Phi_1$ ), words ( $\Phi_2 / \Phi_3 / \Phi_4 / \Phi_7$ ), $n$-gram tokens ( $\Phi_5$ ), and spans ( $\Phi_6$ ) are $79\%$ , $82.7\% / 84.2\% / 83.1\% / 74.3\%$ , $79.6\%$ , and $77.2\%$ , respectively, when the top- $30\%$ of rare tokens are masked. Interestingly, the accuracy of ramBERT using $\Phi_4$ , the union of the rare tokens extracted by the lexical ( $f_2$ ) and semantic representation ( $f_3$ ) approaches, dropped by about $1.1\%$ relative to $\Phi_3$ . This indicates that using the semantic representation approach alone is more effective than combining the lexical and semantic approaches. Another interesting point is that word-based rare tokens improved ramBERT's accuracy more than those in the form of $n$-gram tokens and spans. From these experimental results, we conclude that word-based tokens are more effective than other types of rare tokens because words are an important primitive for context understanding. Due to space limitations, we discuss the experimental results in more detail in Appendix A.
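The top-$p\%$ masking settings (15%/30%/50%) amount to selecting the highest-scoring fraction of tokens by rarity. A hedged sketch, assuming each token already carries a rarity score (higher = rarer); the function name and score representation are illustrative, not the paper's:

```python
def mask_rare_tokens(tokens, rarity, top_pct=0.30, mask="[MASK]"):
    """Replace the top-`top_pct` fraction of tokens, ranked by rarity
    score, with a mask symbol (sketch of the 15%/30%/50% settings)."""
    n_mask = max(1, int(len(tokens) * top_pct))
    # Rank token positions from rarest to most common.
    ranked = sorted(range(len(tokens)), key=lambda i: rarity[i], reverse=True)
    to_mask = set(ranked[:n_mask])
    return [mask if i in to_mask else t for i, t in enumerate(tokens)]
```

With `top_pct=0.30` on a ten-token essay, exactly the three rarest tokens are masked and everything else is left untouched.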
203
+
204
+ # 5.2 Characteristics of Extracted Rare Tokens
205
+
206
+ As some examples of rare tokens, $\Phi_1$ = {'##agen', '##icult', 'dar'}, $\Phi_2$ = {'determination', 'quar-
207
+
208
+ <table><tr><td></td><td>Φ1</td><td>Φ2</td><td>Φ3</td><td>Φ4</td><td>Φ5</td><td>Φ6</td><td>Φ7</td></tr><tr><td>Φ1</td><td>100.0</td><td>70.5</td><td>47.7</td><td>69.7</td><td>41.8</td><td>8.0</td><td>9.8</td></tr><tr><td>Φ2</td><td>61.5</td><td>100.0</td><td>49.9</td><td>87.2</td><td>44.4</td><td>8.4</td><td>9.1</td></tr><tr><td>Φ3</td><td>47.7</td><td>57.2</td><td>100.0</td><td>98.9</td><td>77.9</td><td>8.4</td><td>7.4</td></tr><tr><td>Φ4</td><td>46.8</td><td>67.0</td><td>64.5</td><td>100.0</td><td>52.3</td><td>8.4</td><td>7.7</td></tr><tr><td>Φ5</td><td>22.2</td><td>26.9</td><td>41.3</td><td>41.3</td><td>100.0</td><td>5.3</td><td>3.3</td></tr><tr><td>Φ6</td><td>5.0</td><td>6.0</td><td>5.3</td><td>7.9</td><td>6.4</td><td>100.0</td><td>1.0</td></tr><tr><td>Φ7</td><td>38.6</td><td>41.0</td><td>29.3</td><td>44.9</td><td>25.0</td><td>6.3</td><td>100.0</td></tr></table>
209
+
210
+ Table 4: ROUGE-L of rare token sets.
211
+
212
+ <table><tr><td></td><td>Φ1</td><td>Φ2</td><td>Φ3</td><td>Φ4</td><td>Φ5</td><td>Φ6</td><td>Φ7</td></tr><tr><td>Φ1</td><td>100.0</td><td>73.7</td><td>58.3</td><td>52.2</td><td>0.0</td><td>23.6</td><td>12.3</td></tr><tr><td>Φ2</td><td>74.3</td><td>100.0</td><td>57.2</td><td>74.1</td><td>0.0</td><td>29.4</td><td>12.1</td></tr><tr><td>Φ3</td><td>58.3</td><td>56.6</td><td>100.0</td><td>61.2</td><td>0.0</td><td>23.0</td><td>9.9</td></tr><tr><td>Φ4</td><td>57.2</td><td>76.9</td><td>67.0</td><td>100.0</td><td>0.0</td><td>38.9</td><td>10.4</td></tr><tr><td>Φ5</td><td>0.0</td><td>0.0</td><td>0.0</td><td>0.0</td><td>100.0</td><td>0.0</td><td>0.0</td></tr><tr><td>Φ6</td><td>26.0</td><td>30.6</td><td>25.3</td><td>38.9</td><td>0.0</td><td>100.0</td><td>6.9</td></tr><tr><td>Φ7</td><td>2.8</td><td>1.8</td><td>2.2</td><td>0.5</td><td>0.0</td><td>0.3</td><td>100.0</td></tr></table>
213
+
214
+ antine', 'forfeiture'}, $\Phi_3$ = {'##ian', '##vis', 'presumably'}, $\Phi_4$ = {'##ian', 'presumably', '##gu'}, $\Phi_5$ = {'accur_confidence', 'prefer_##uck', 'phen_##uv'}, $\Phi_6$ = {'prepared memorandum found in', 'engage in genuine consultations', 'pestilential burning wind called by'}, and $\Phi_7$ = {'untracked', 'apocalyptic', 'confinement'}. Unlike the other sets, most tokens in $\Phi_7$ seem to be extremely rare across corpora. All rare tokens, regardless of their form (subwords, words, $n$-grams, and spans), are tokenized into subwords that are masked in ramBERT. For example, the token 'prodigious' extracted by $f_{3}$ is tokenized into the three subwords 'pro', '##dig', and '##ious', which are masked when pre-training ramBERT.
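The 'prodigious' example follows the standard greedy longest-match WordPiece scheme. A toy sketch with an illustrative three-piece vocabulary (real BERT vocabularies hold ~30k pieces):

```python
def wordpiece(word, vocab):
    """Greedy longest-match WordPiece tokenization (toy version).
    Non-initial pieces carry the '##' continuation prefix."""
    pieces, start = [], 0
    while start < len(word):
        end = len(word)
        while end > start:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:
                pieces.append(piece)
                start = end
                break
            end -= 1
        else:
            # No vocabulary piece matches: the whole word is unknown.
            return ["[UNK]"]
    return pieces
```

Masking the rare word then means masking every resulting piece, e.g. `["[MASK]"] * len(wordpiece("prodigious", vocab))`, so the whole rare word is hidden during pre-training.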
215
+
216
+ Tables $2\sim 5$ show similarity values for pairs of rare token sets. To measure the similarities, we used ROUGE-1/2/L as recall-based measures and BLEU as a precision-based measure. Since both ROUGE and BLEU are asymmetric metrics, the results of $\mathrm{sim}(\Phi_i,\Phi_j)$ and $\mathrm{sim}(\Phi_j,\Phi_i)$ differ slightly. In the tables, since $\Phi_5$ is a set of $n$-gram tokens (2-grams in our experiments), where two consecutive tokens are represented as one token, its similarity values are close to zero. Unexpectedly, we observed that the rare tokens extracted by $f_{3}$ do not overlap much with those extracted by $f_{2}$ , which in turn are similar to those extracted by $f_{1}$ . This means that rare tokens extracted
217
+
218
+ Table 5: BLEU of rare token sets.
219
+
220
+ <table><tr><td></td><td>15%</td><td>30%</td><td>50%</td></tr><tr><td>f1</td><td>77.5</td><td>79.2</td><td>80.1</td></tr><tr><td>f2</td><td>80.2</td><td>82.7</td><td>83.5</td></tr><tr><td>f3</td><td>81.4</td><td>84.2</td><td>85.0</td></tr><tr><td>f4</td><td>75.1</td><td>77.2</td><td>77.9</td></tr><tr><td>f5 (T-Rare)</td><td>74.1</td><td>74.4</td><td>75.1</td></tr></table>
221
+
222
+ Table 6: Accuracies of ramBERT according to different percentage of masked rare tokens.
223
+
224
+ by the lexical representation approach are quite different from those by the semantic representation approach.
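The asymmetry of the set-similarity measure can be made concrete with a minimal ROUGE-1-style recall over token sets. This is a sketch; the exact ROUGE configuration used for Tables 2~5 may differ:

```python
def rouge1_recall(candidate, reference):
    """ROUGE-1 recall between token sets: the percentage of reference
    unigrams that also appear in the candidate. Swapping the arguments
    generally changes the value, which is why sim(A, B) != sim(B, A)."""
    ref = set(reference)
    if not ref:
        return 0.0
    return 100.0 * len(ref & set(candidate)) / len(ref)
```

For two sets of different sizes sharing two tokens, recall against the four-element set is 50.0 while recall against the three-element set is about 66.7, mirroring the slight asymmetry seen in the tables.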
225
+
226
+ Table 6 summarizes the accuracies of ramBERT for different percentages of masked rare tokens. As the percentage of masked rare tokens increases, the accuracy of ramBERT improves, converging when the percentage reaches almost $50\%$ . For $f_{5}$ , we used three different dictionaries of rare words: (1) Harvard Rare Word (H-Rare), (2) Cambridge Rare Word (C-Rare), and (3) Contextual Rare Word (T-Rare). When the top $15\% / 30\% / 50\%$ of rare words are masked, the average accuracies of $f_{5}$ using H-Rare, C-Rare, and T-Rare are $74\% / 74\% / 74.1\%$ , $74.3\% / 74\% / 74.4\%$ , and $75\% / 74.2\% / 75.1\%$ , respectively. These results indicate that there is no significant difference in accuracy among the three dictionaries.
227
+
228
+ # 6 Conclusion
229
+
230
+ In this paper, we proposed a method to detect creative essay writing using ramBERT (i.e., rare token masking-based BERT). We used seven different rare token sets and pre-trained BERT on a large dataset after masking the rare tokens. Our experimental results show that rare tokens are highly correlated with essay creativity scores. Consequently, ramBERT improved accuracy by up to $14\%$ compared to a regular BERT based on random word masking. The performance improvements are also reflected in the ROUGE and BLEU scores.
231
+
232
+ # Limitations
233
+
234
+ We used the ASAP data set to evaluate the performance of the proposed method. Although the dataset is well known and widely used, it has several limitations. First, the data size is small. Even when pre-training the model on a decently large data set (e.g., Wikipedia), the interpretation of the experimental results is limited by the data size. Second, there is an inherent bias in the data set. Since the ASAP data set is labeled by human raters, it is biased by personal preferences. Finally, the proposed approach requires substantial pre-processing to extract all the additional features, which hinders scalability. Additionally, our work is limited to measuring creativity in expression, not in content.
235
+
236
+ # Acknowledgements
237
+
238
+ This work was supported in part by the National Research Foundation of Korea (NRF) Grant by Korean Government through the Ministry of Science and ICT (MSIT) under Grant NRF2022R1A2C1011404.
239
+
240
+ # References
241
+
242
+ Saeda A Al Awaida, Bassam Al-Shargabi, and Thamer Al-Rousan. 2019. Automated arabic essay grading system based on f-score and arabic worldnet. *Jordanian Journal of Computers and Information Technology*, 5(3).
243
+ Maram F Al-Jouie and Aqil M Azmi. 2017. Automated evaluation of school children essays in arabic. *Procedia Computer Science*, 117:19-22.
244
+ Mansour Alghamdi, Mohamed Alkanhal, Mohamed Al-Badrashiny, Abdulaziz Al-Qabbany, Ali Areshey, and Abdulaziz Alharbi. 2014. A hybrid automatic scoring system for arabic essays. *Ai Communications*, 27(2):103-111.
245
+ A Amalia, D Gunawan, Y Fithri, and I Aulia. 2019. Automated bahasa indonesia essay evaluation with latent semantic analysis. In Journal of Physics: Conference Series, volume 1235, page 012100. IOP Publishing.
246
+ Reinald Kim Amplayo, Seung-won Hwang, and Min Song. 2019. Evaluating research novelty detection: Counterfactual approaches. In Proceedings of the thirteenth workshop on graph-based methods for natural language processing (TextGraphs-13), pages 124-133.
247
+ ASAP. 2022. Automated student assessment prize (asap).
248
+ Yigal Attali and Jill Burstein. 2006. Automated essay scoring with e-rater v. 2. The Journal of Technology, Learning and Assessment, 4(3).
249
+ Roger E Beaty and Dan R Johnson. 2021. Automating creativity assessment with semdis: An open platform for computing semantic distance. Behavior research methods, 53(2):757-780.
250
+ Bimal Bhattacharai, Ole-Christoffer Granmo, and Lei Jiao. 2020. Measuring the novelty of natural language text using the conjunctive clauses of a tsetlin machine text classifier. arXiv preprint arXiv:2011.08755.
251
+ Yue Cao, Hanqi Jin, Xiaojun Wan, and Zhiwei Yu. 2020. Domain-adaptive neural automated essay scoring. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1011-1020.
252
+ Hongbo Chen and Ben He. 2013. Automated essay scoring by maximizing human-machine agreement.
253
+
254
+ In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1741-1752.
255
+ Minah Cheon, Hyeong-Won Seo, Jae-Hoon Kim, Eun-Hee Noh, Kyung-Hee Sung, and EunYong Lim. 2015. An automated scoring tool for korean supply-type items based on semi-supervised learning. In Proceedings of the 2nd Workshop on Natural Language Processing Techniques for Educational Applications, pages 59–63.
256
+ Renukswamy Chikkamath, Markus Endres, Lavanya Bayyapu, and Christoph Hewel. 2020. An empirical study on patent novelty detection: A novel approach using machine learning and natural language processing. In 2020 Seventh International Conference on Social Networks Analysis, Management and Security (SNAMS), pages 1-7. IEEE.
257
+ Clément Christophe, Julien Velcin, Jairo Cugliari, Manel Boumghar, and Philippe Suignard. 2021. Monitoring geometrical properties of word embeddings for detecting the emergence of new topics. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 994-1003, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
258
+ Clement Christophe, Julien Velcin, Jairo Cugliari, Philippe Suignard, and Manel Boumghar. 2020. How to detect novelty in textual data streams? a comparative study of existing methods. In International Workshop on Advanced Analysis and Learning on Temporal Data, pages 110-125. Springer.
259
+ Jennifer O Contreras, Shadi Hilles, and Zainab Binti Abubakar. 2018. Automated essay scoring with ontology based on text mining and nltk tools. In 2018 International Conference on Smart Computing and Electronic Enterprise (ICSCEE), pages 1-6. IEEE.
260
+ Mădalina Cozma, Andrei M Butnaru, and Radu Tudor Ionescu. 2018. Automated essay scoring with string kernels and word embeddings. arXiv preprint arXiv:1804.07954.
261
+ Tirthankar Dasgupta, Abir Naskar, Lipika Dey, and Rupsa Saha. 2018. Augmenting textual qualitative features in deep convolution recurrent neural network for automatic essay scoring. In Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications, pages 93-102.
262
+ Simona Doboli, Jared Kenworthy, Paul Paulus, Ali Minai, and Alex Doboli. 2020. A cognitive inspired method for assessing novelty of short-text ideas. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.
263
+ Fei Dong and Yue Zhang. 2016. Automatic features for essay scoring—an empirical study. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 1072-1077.
264
+
265
+ Fei Dong, Yue Zhang, and Jie Yang. 2017. Attention-based recurrent convolutional neural network for automatic essay scoring. In Proceedings of the 21st conference on computational natural language learning (CoNLL 2017), pages 153-162.
266
+ Youmna Farag, Helen Yannakoudakis, and Ted Briscoe. 2018. Neural automated essay scoring and coherence modeling for adversarially crafted input. arXiv preprint arXiv:1804.06898.
267
+ Tirthankar Ghosal, Vignesh Edithal, Asif Ekbal, Pushpak Bhattacharyya, George Tsatsaronis, and Srinivasa Satya Sameer Kumar Chivukula. 2018a. Novelty goes deep: a deep neural solution to document level novelty detection. In Proceedings of the 27th international conference on Computational Linguistics, pages 2802-2813.
268
+ Tirthankar Ghosal, Tanik Saikh, Tameesh Biswas, Asif Ekbal, and Pushpak Bhattacharyya. 2022. Novelty detection: A perspective from natural language processing. Computational Linguistics, 48(1):77-117.
269
+ Tirthankar Ghosal, Amitra Salam, Swati Tiwari, Asif Ekbal, and Pushpak Bhattacharyya. 2018b. Tap-dlnd 1.0: A corpus for document level novelty detection. arXiv preprint arXiv:1802.06950.
270
+ Aurelie Herbelot and Marco Baroni. 2017. High-risk learning: acquiring new word vectors from tiny data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 304-309, Copenhagen, Denmark. Association for Computational Linguistics.
271
+ Jun Imaki, Shunichi Ishihara, et al. 2013. Experimenting with a japanese automated essay scoring system in the L2 japanese environment. Papers in Language Testing and Assessment, 2(2):28-47.
272
+ Tsunenori Ishioka and Masayuki Kameda. 2006. Automated Japanese essay scoring system based on articles written by experts. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 233-240.
273
+ Cancan Jin and Ben He. 2015. Utilizing latent semantic word representations for automated essay scoring. In 2015 IEEE 12th Intl Conf on Ubiquitous Intelligence and Computing and 2015 IEEE 12th Intl Conf on Autonomic and Trusted Computing and 2015 IEEE 15th Intl Conf on Scalable Computing and Communications and Its Associated Workshops (UIC-ATC-ScalCom), pages 1101-1108. IEEE.
274
+ Cancan Jin, Ben He, Kai Hui, and Le Sun. 2018. Tdnn: a two-stage deep neural network for prompt-independent automated essay scoring. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1088-1097.
275
+
276
+ Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. Span-BERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.
277
+ Tuomo Kakkonen, Niko Myller, Jari Timonen, and Erkki Sutinen. 2005. Automatic essay grading with probabilistic latent semantic analysis. In Proceedings of the second workshop on Building Educational Applications Using NLP, pages 29-36.
278
+ Mikhail Khodak, Nikunj Saunshi, Yingyu Liang, Tengyu Ma, Brandon Stewart, and Sanjeev Arora. 2018. A la carte embedding: Cheap but effective induction of semantic feature vectors. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12-22, Melbourne, Australia. Association for Computational Linguistics.
279
+ Leah S Larkey. 1998. Automatic essay grading using text categorization techniques. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 90-95.
280
+ Guoxi Liang, Byung-Won On, Dongwon Jeong, Ali Asghar Heidari, Hyun-Chul Kim, Gyu Sang Choi, Yongchuan Shi, Qinghua Chen, and Huiling Chen. 2021. A text gan framework for creative essay recommendation. Knowledge-Based Systems, 232:107501.
281
+ Guoxi Liang, Byung-Won On, Dongwon Jeong, Hyun-Chul Kim, and Gyu Sang Choi. 2018. Automated essay scoring: A siamese bidirectional LSTM neural network architecture. Symmetry, 10(12):682.
282
+ Yu-Ju Lu, Bor-Chen Kuo, and Kai-Chih Pai. 2017. Developing chinese automated essay scoring model to assess college students' essay quality. In *EDM*.
283
+ Thang Luong, Richard Socher, and Christopher Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104-113, Sofia, Bulgaria. Association for Computational Linguistics.
284
+ Sandeep Mathias, Rudra Murthy, Diptesh Kanojia, Abhijit Mishra, and Pushpak Bhattacharyya. 2020. Happy are those who grade without seeing: A multitask learning approach to grade essays using gaze behaviour. arXiv preprint arXiv:2005.12078.
285
+ Elijah Mayfield and Alan W Black. 2020. Should you fine-tune bert for automated essay scoring? In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 151-162.
286
+ Panitan Muangkammuen and Fumiyo Fukumoto. 2020. Multi-task learning for automated essay scoring with sentiment analysis. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language
287
+
288
+ Processing: Student Research Workshop, pages 116-123.
289
+ Dipannyta Nandi and Rohini Basak. 2020. A quest to detect novelty using deep neural nets. In 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), pages 1-7. IEEE.
290
+ Huyen Nguyen and Lucio Dery. 2016. Neural networks for automated essay grading. CS224d Stanford Reports, pages 1-11.
291
+ Myeong-Wan Noh and Rayeon Kim. 2008. A study on analysis of leaner responses in web board-based reading discussions. Journal of Korea Reading Association, 20(1):171-199.
292
+ Xingyuan Peng, Dengfeng Ke, Zhenbiao Chen, and Bo Xu. 2010. Automated chinese essay scoring using vector space models. In 2010 4th International Universal Communication Symposium, pages 149-153. IEEE.
293
+ Peter Phandi, Kian Ming A Chai, and Hwee Tou Ng. 2015. Flexible domain adaptation for automated essay scoring using correlated linear regression. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 431-439.
294
+ Mohammad Taher Pilehvar, Dimitri Kartsaklis, Victor Prokhorov, and Nigel Collier. 2018. Card-660: Cambridge rare word dataset - a reliable benchmark for infrequent word representation models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1391-1401, Brussels, Belgium. Association for Computational Linguistics.
295
+ J. A. Plucker, R. A. Beghetto, and G. T. Dow. 2004. Why isn't creativity more important to educational psychologists? potentials, pitfalls, and future directions in creativity research. Educational Psychologist, 39(2):83-96.
296
+ VV Ramalingam, A Pandian, Prateek Chetry, and Himmanshu Nigam. 2018. Automated essay grading using machine learning algorithm. In Journal of Physics: Conference Series, volume 1000, page 012030. IOP Publishing.
297
+ Anak Agung Putri Ratna, Adam Arsy Arbani, Ihsan Ibrahim, F Astha Ekadiyanto, Kristofer Jehezkiel Bangun, and Prima Dewi Purnamasari. 2018. Automatic essay grading system based on latent semantic analysis with learning vector quantization and word similarity enhancement. In Proceedings of the 2018 International Conference on Artificial Intelligence and Virtual Reality, pages 120-126.
298
+ Anak Agung Putri Ratna, Bagio Budiardjo, and Djoko Hartanto. 2007. Simple: System automatic essay assessment for Indonesian language subject examination. Makara Journal of Technology, 11(1):2.
299
+
300
+ Anak Agung Putri Ratna, Aaliyah Kaltsum, Lea Santiar, Hanifah Khairunissa, Ihsan Ibrahim, and Prima Dewi Purnamasari. 2019a. Term frequency-inverse document frequency answer categorization with support vector machine on automatic short essay grading system with latent semantic analysis for japanese language. In 2019 International Conference on Electrical Engineering and Computer Science (ICECOS), pages 293-298. IEEE.
301
+ Anak Agung Putri Ratna, Hanifah Khairunissa, Aaliyah Kaltsum, Ihsan Ibrahim, and Prima Dewi Purnamasari. 2019b. Automatic essay grading for bahasa indonesia with support vector machine and latent semantic analysis. In 2019 International Conference on Electrical Engineering and Computer Science (ICE-COS), pages 363-367. IEEE.
302
+ Anak Agung Putri Ratna, Prima Dewi Purnamasari, and Boma Anantasatya Adhi. 2015. Simple-o, the essay grading system for Indonesian language using lsa method with multi-level keywords. In The Asian Conference on Society, Education & Technology, pages 155-164.
303
+ Anak Agung Putri Ratna, Lea Santiar, Ihsan Ibrahim, Prima Dewi Purnamasari, Dyah Lalita Luhurkinanti, and Adisa Larasati. 2019c. Latent semantic analysis and winnowing algorithm based automatic japanese short essay answer grading system comparative performance. In 2019 IEEE 10th International Conference on Awareness Science and Technology (iCAST), pages 1-7. IEEE.
304
+ Robert Ridley, Liang He, Xin-yu Dai, Shujian Huang, and Jiajun Chen. 2021. Automated cross-prompt scoring of essay traits. In Proceedings of the AAAI conference on artificial intelligence, volume 35, pages 13745-13753.
305
+ Pedro Uria Rodriguez, Amir Jafari, and Christopher M Ormerod. 2019. Language models and automated essay scoring. arXiv preprint arXiv:1909.09482.
306
+ Lawrence M Rudner and Tahung Liang. 2002. Automated essay scoring using bayes' theorem. The Journal of Technology, Learning and Assessment, 1(2).
307
+ Abdulaziz Shehab, Mahmoud Faroun, and Magdi Rashad. 2018. An automatic arabic essay grading system based on text similarity algorithms. International Journal of Advanced Computer Science and Applications, 9(3).
308
+ Edwin Simpson, Erik-Lan Do Dinh, Tristan Miller, and Iryna Gurevych. 2019. Predicting humorousness and metaphor novelty with gaussian process preference learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5716-5728.
309
+ Wei Song, Kai Zhang, Ruiji Fu, Lizhen Liu, Ting Liu, and Miaomiao Cheng. 2020. Multi-stage pre-training for automated chinese essay scoring. In Proceedings of the 2020 Conference on Empirical Methods in
310
+
311
+ Natural Language Processing (EMNLP), pages 6723-6733.
312
+ R. J. Sternberg and T. I. Lubart. 1999. The Concept of Creativity: Prospects and Paradigms. Cambridge University Press.
313
+ Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 1882-1891.
314
+ Yi Tay, Minh Phan, Luu Anh Tuan, and Siu Cheung Hui. 2018. Skipflow: Incorporating neural coherence features for end-to-end automatic text scoring. In Proceedings of the AAAI conference on artificial intelligence, volume 32.
315
+ E. P. Torrance. 1974. The Torrance Tests of Creative Thinking: Norms-technical Manual Princeton. NJ: Personal Press.
316
+ Masaki Uto, Yikuan Xie, and Maomi Ueno. 2020. Neural automated essay scoring incorporating handcrafted features. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6077-6088.
317
+ Yongjie Wang, Chuan Wang, Ruobing Li, and Hui Lin. 2022. On the use of bert for automated essay scoring: Joint learning of multi-scale essay representation. arXiv preprint arXiv:2205.03835.
318
+ Yucheng Wang, Zhongyu Wei, Yaqian Zhou, and Xuan-Jing Huang. 2018. Automatic essay scoring incorporating rating schema via reinforcement learning. In Proceedings of the 2018 conference on empirical methods in natural language processing, pages 791-797.
319
+ Wee Sian Wong and Chih How Bong. 2019. A study for the development of automated essay scoring (aes) in malaysian english test environment. International Journal of Innovative Computing, 9(1).
320
+ Lijun Wu, Juntao Li, Yue Wang, Qi Meng, Tao Qin, Wei Chen, Min Zhang, Tie-Yan Liu, et al. 2021. R-drop: Regularized dropout for neural networks. Advances in Neural Information Processing Systems, 34:10890-10905.
321
+ Yanyan Xu, Dengfeng Ke, and Kaile Su. 2017. Contextualized latent semantic indexing: A new approach to automated chinese essay scoring. Journal of Intelligent Systems, 26(2):263-285.
322
+ Ruosong Yang, Jiannong Cao, Zhiyuan Wen, Youzheng Wu, and Xiaodong He. 2020. Enhancing automated essay scoring performance via fine-tuning pre-trained language models with combination of regression and ranking. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1560-1569.
323
+ Helen Yannakoudakis, Ted Briscoe, and Ben Medlock. 2011. A new dataset and method for automatically grading esol texts. In Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies, pages 180-189.
324
+
325
+ # A Appendix
326
+
327
+ Table 7 shows one of the high-school students' essays in the ASAP dataset. Interestingly, the baseline model classified it as non-creative, whereas the ramBERT model classified it as creative no matter which rare token extraction method was used. Each token in $d_{i}$ is counted as 1 if it is included in the set of tokens generated by a given rare token extraction method.
328
+
329
+ The numbers of tokens in $d_{i}$ matched to sets $\Phi_1$ , $\Phi_2$ , $\Phi_3$ , $\Phi_6$ , and $\Phi_7$ are 6, 7, 11, 4, and 0, respectively. The tokens matched to $\Phi_1$ are 'contr', 'distract', 'gorgeous', 'wr', '##fi', and 'distract', where a token like 'distract' appears twice in $d_{i}$ . Those matched to $\Phi_2$ are 'controversial', 'distraction', 'exposer', 'unhealthy', 'gorgeous', 'beneficial', and 'tempting'. Those matched to $\Phi_3$ are 'contr', 'concern', 'concern', 'expose', 'gorgeous', 'concern', 'bene', '##fi', '##st', 'tempt', '##itely', and '##uter'. Those matched to $\Phi_6$ are 'controversial issue in my', 'accessing anything', 'serious concern to', and 'tempting'.
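The matching counts above can be reproduced with a one-line helper. A hedged sketch, where every occurrence of a token in the essay counts separately (so a token like 'distract' appearing twice contributes 2):

```python
def count_matches(doc_tokens, rare_set):
    """Count how many tokens of an essay appear in a rare-token set;
    repeated occurrences are counted each time."""
    return sum(1 for t in doc_tokens if t in rare_set)
```

For example, an essay containing 'distract' twice plus 'gorgeous' and 'wr' against a set holding all three words yields a count of 4.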
330
+
331
+ In particular, the reason why the number of tokens matched with $\Phi_7$ is 0 is that the number of rare words in $\Phi_7$ is as small as 2,034. Moreover, since the rare words in the Harvard dictionary were generated primarily from affixes and frequencies, it is unlikely that such rare words would appear across several domains. Examples from the Harvard dictionary are 'untracked', 'unflagging', 'unprecedented', 'apocalyptic', 'organismal', 'diagonal', 'obtainment', 'discernment', and 'confinement', where the underlined parts of the rare words are affixes.
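Affix-based filtering of the kind behind the Harvard examples can be approximated by simple prefix/suffix checks. The affix lists below are illustrative assumptions, not the dictionary's actual rules:

```python
# Hypothetical affix lists, chosen to cover the examples above.
PREFIXES = ("un", "dis", "ob")
SUFFIXES = ("ment", "al", "ic", "ing", "ed")

def has_rare_affix(word):
    """Rough sketch of affix-based rare-word filtering: flag words
    carrying one of the listed prefixes or suffixes."""
    return word.startswith(PREFIXES) or word.endswith(SUFFIXES)
```

Words like 'untracked' or 'obtainment' are flagged, while a common word like 'house' is not, which illustrates why such a dictionary stays small and domain-bound.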
332
+
333
+ The tokens that match $\Phi_2$ , such as 'gorgeous', 'beneficial', and 'tempting', appear to be lexically rare tokens in the corpus of essays. Most tokens matched with $\Phi_3$ , such as 'gorgeous', 'bene', '##fi', and 'tempt', are similar to those matched with $\Phi_2$ , but the number of rare tokens is relatively larger. The semantic representation method ( $f_3$ ) tends to extract additional rare tokens beyond those extracted through the lexical representation method ( $f_2$ ). Therefore, hidden rare tokens that cannot be extracted by existing lexical representation methods can be extracted through a semantic representation method like $f_3$ .
334
+
335
+ From the experimental results, we can carefully hypothesize that an essay might be expressively creative if it contains many rare tokens. Since a detailed discussion of this hypothesis is beyond the
336
+
337
+ scope of this paper, we do not proceed further here. Instead, we will deeply investigate the validity of this hypothesis through additional in-depth studies. In addition, we will attempt to establish a theory for the hypothesis.
338
+
339
+ Similarly, we observed in our experimental results that there is a correlation between essay scores and creative essays. The essay scores are evaluators' scores for how well an essay is written, regardless of creativity. In the ASAP dataset, the essay scores range from 2 to 12 points, and the higher the score, the better the essay. The essay score of the essay shown in Table 7 is 12 points as well. This is because evaluators score student essays in terms of grammar, expressiveness, and composition, but if there is additional novelty in expression or content, the essay tends to receive a higher score.
340
+
341
+ I think we can all agree that computer usage is a very controversial issue in my opinion. I believe that computers have a negative effect on people. For instance, it's not safe and children can get into all sorts of things on the Internet. Also, people spend too much time in front the computer now a days it's a major distraction and also a negative effect on kids school work. It's now or never do we decide that computers have a negative effect. You decide isn't every parents biggest concern the safety of their children. When on the Internet kids are capable of accessing anything and everything. Sometimes kids don't even look for bad things they just pop up. Would you want your child viewing things that you have no control over. Also, websites like com one of the greatest concerns when it comes to Internet safety. Although you are supposed to be at least to have a most kids lie about their age. Did you know that out of users lie about their age. And it's not always a year old saying they are it could be a year old saying they're. Not only do people lie about their age they lie about who they are. Is this the kind of Internet exposer you want for you children put a stop to this right now. More than of are overweight and unhealthy. This is another negative effect computers have on people. It's a gorgeous day Bright blue skies cotton candy clouds the sun is shining and there's a nice warm breeze Perfect day to go out and get active right Wrong. None people would be inside on the computer instead of going for a walk people would spend hours on Facebook. This is a serious concern to our health. People don't exercise enough as it is and then when you add computers, people will never get active instead of playing video games online people need to be reminded that turning off the computer and playing a fun neighbourhood game of baseball is just as fun and much more beneficial. This is just one step need to take to get a healthier lifestyle. Wouldn't you agree? 
Did you know that kids that spend more time on computer are more likely to do poorly in school. Surely if nothing else will convince you of the negative effects of a computer, this will than coming home and doing homework more time is spent in front of the computer. As a student, I will admit that the computer is a very tempting distraction and can easily pull a student away from their studies. You can't expect a child to make the right decision and tell their they have to go because they need to study. So you do take action now, or your child will definitely suffer. The time has come to decide. Do you believe computers have a negative effect on people. It's clear that the computer is not safe. Not to mention too much time is spent on the computer instead of being active. Most importantly, computers will negatively affect children's grades. Don't wait another minute. Let's agree and do something about this.
342
+
343
+ A For every submission:
344
+
345
+ A1. Did you describe the limitations of your work? 7
346
+ A2. Did you discuss any potential risks of your work?
347
+ A3. Do the abstract and introduction summarize the paper's main claims?
348
+ A4. Have you used AI writing assistants when working on this paper? Left blank.
349
+
350
+ B Did you use or create scientific artifacts?
351
+
352
+ 3
353
+
354
+ B1. Did you cite the creators of artifacts you used? 24
355
+ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Left blank.
356
+ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? 3
357
+ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? 34
358
+ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? 34
359
+ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. 4
360
+
361
+ C Did you run computational experiments?
362
+
363
+ 4
364
+
365
+ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4
366
+
367
+ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
368
+
369
+ 4
370
+
371
+ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
372
+
373
+ 5
374
+
375
+ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
376
+
377
+ 4
378
+
379
+ D Did you use human annotators (e.g., crowdworkers) or research with human participants?
380
+
381
+ 4
382
+
383
+ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
384
+
385
+ 4
386
+
387
+ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
388
+
389
+ 4
390
+
391
+ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
392
+
393
+ 4
394
+
395
+ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
396
+
397
+ 4
398
+
399
+ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
400
+
401
+ 4
2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/images.zip ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dcc8e8041cb71b8cb8e70745b39d2686f4c5ae21817e1795536399a419897c1e
3
+ size 200859
2023/A Comparative Analysis of the Effectiveness of Rare Tokens on Creative Expression using ramBERT/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/1bad24cc-7495-4411-88f1-9e429d01bb74_content_list.json ADDED
@@ -0,0 +1,2047 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 218,
8
+ 79,
9
+ 779,
10
+ 118
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Limao Xiong $^{1}$ , Jie Zhou $^{1*}$ , Qunxi Zhu $^{2}$ , Xiao Wang $^{1}$ , Yuanbin Wu $^{3}$ , Qi Zhang $^{1}$ , Tao Gui $^{4}$ , Xuanjing Huang $^{1}$ , Jin Ma $^{5}$ , Ying Shan $^{5}$",
17
+ "bbox": [
18
+ 166,
19
+ 124,
20
+ 840,
21
+ 158
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "$^{1}$ School of Computer Science, Fudan University",
28
+ "bbox": [
29
+ 319,
30
+ 159,
31
+ 687,
32
+ 175
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "$^{2}$ Research Institute of Intelligent Complex Systems, Fudan University",
39
+ "bbox": [
40
+ 240,
41
+ 175,
42
+ 763,
43
+ 191
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "$^{3}$ The Department of Computer Science and Technology, East China Normal University",
50
+ "bbox": [
51
+ 179,
52
+ 192,
53
+ 826,
54
+ 209
55
+ ],
56
+ "page_idx": 0
57
+ },
58
+ {
59
+ "type": "text",
60
+ "text": "$^{4}$ Institute of Modern Languages and Linguistics, Fudan University",
61
+ "bbox": [
62
+ 257,
63
+ 209,
64
+ 751,
65
+ 225
66
+ ],
67
+ "page_idx": 0
68
+ },
69
+ {
70
+ "type": "text",
71
+ "text": "$^{5}$ Applied Research Center (ARC), Tencent PCG",
72
+ "bbox": [
73
+ 322,
74
+ 225,
75
+ 685,
76
+ 241
77
+ ],
78
+ "page_idx": 0
79
+ },
80
+ {
81
+ "type": "text",
82
+ "text": "Abstract",
83
+ "text_level": 1,
84
+ "bbox": [
85
+ 260,
86
+ 252,
87
+ 339,
88
+ 266
89
+ ],
90
+ "page_idx": 0
91
+ },
92
+ {
93
+ "type": "text",
94
+ "text": "Existing models for named entity recognition (NER) are mainly based on large-scale labeled datasets, which are usually obtained via crowdsourcing. However, it is hard to obtain a unified and correct label via majority voting from multiple annotators for NER due to the large labeling space and complexity of this task. To address this problem, we aim to utilize the original multi-annotator labels directly. Particularly, we propose a Confidence-based Partial Label Learning (CPLL) method to integrate the prior confidence (given by annotators) and posterior confidence (learned by models) for crowd-annotated NER. This model learns a token- and content-dependent confidence via an Expectation-Maximization (EM) algorithm by minimizing empirical risk. The true posterior estimator and confidence estimator run iteratively to update the true posterior and confidence, respectively. We conduct extensive experiments on both real-world and synthetic datasets, and the results show that our model can improve performance effectively compared with strong baselines.",
95
+ "bbox": [
96
+ 141,
97
+ 278,
98
+ 460,
99
+ 619
100
+ ],
101
+ "page_idx": 0
102
+ },
103
+ {
104
+ "type": "text",
105
+ "text": "1 Introduction",
106
+ "text_level": 1,
107
+ "bbox": [
108
+ 114,
109
+ 631,
110
+ 260,
111
+ 645
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "text",
117
+ "text": "Named entity recognition (NER) plays a fundamental role in many downstream natural language processing (NLP) tasks, such as relation extraction (Bach and Badaskar, 2007) and event extraction (Wadden et al., 2019; Zhou et al., 2022). Recently, by leveraging deep learning models, NER systems have achieved superior performance on NER datasets. However, these models typically require a massive amount of labeled training data, such as MSRA (Levow, 2006), Ontonotes 4.0 (Weischedel et al., 2011), and Resume (Zhang and Yang, 2018). In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. The most common way to label the data at a lower cost",
118
+ "bbox": [
119
+ 112,
120
+ 656,
121
+ 489,
122
+ 898
123
+ ],
124
+ "page_idx": 0
125
+ },
126
+ {
127
+ "type": "text",
128
+ "text": "is crowdsourcing (Peng and Dredze, 2015), which labels the data using multiple annotators.",
129
+ "bbox": [
130
+ 507,
131
+ 253,
132
+ 880,
133
+ 285
134
+ ],
135
+ "page_idx": 0
136
+ },
137
+ {
138
+ "type": "text",
139
+ "text": "The crowd-annotated datasets are often of low quality for the following two reasons. First, in exchange for the lower cost, crowd annotators are usually non-experts. Various annotators may have different interpretations of the labeling guidelines. Moreover, they may make mistakes in the labeling process. It is hard to get a number of annotators to reach an agreement. For example, annotator 1 labels \"David and Jack\" as a PER entity, while the correct labels are \"David\" and \"Jack\" under our guidelines (Table 1). Also, we should label a continuous time or place as one entity (e.g., \"tomorrow at 10:00 a.m.\" and \"company (room 1003)\"). Second, due to the ambiguous word boundaries and complex composition, the NER task is more challenging than text classification tasks. Annotator 3 ignores the token \"a.m.\" in the time entity and falsely adds \"the\" as part of the place entity. Also, he/she misses the person entities in the text. In this paper, we focus on building a powerful NER system based on crowd-annotated data of low quality.",
140
+ "bbox": [
141
+ 507,
142
+ 287,
143
+ 884,
144
+ 625
145
+ ],
146
+ "page_idx": 0
147
+ },
148
+ {
149
+ "type": "text",
150
+ "text": "There are two main ways to utilize crowd-annotated data. One simple and common way to obtain a high-quality annotation for each input instance is majority voting. As shown in Table 1, the majority voting method cannot recover the correct answers from these three annotations. The right labels (e.g., \"David\", \"Jack\", \"tomorrow at 10:00 a.m.\", and \"company (room 1003)\") are each annotated only once, by annotator 1 or 2. Another line of work models the differences among annotators by finding the trustworthy annotators (Rodrigues et al., 2014; Nguyen et al., 2017; Yang et al., 2018). From Table 1, we can see that none of the three annotators labels the entities entirely correctly. Thus, both kinds of methods waste human labor.",
151
+ "bbox": [
152
+ 507,
153
+ 627,
154
+ 882,
155
+ 883
156
+ ],
157
+ "page_idx": 0
158
+ },
159
+ {
160
+ "type": "text",
161
+ "text": "To address this problem, we translate this task into a partial label learning (PLL) problem, which",
162
+ "bbox": [
163
+ 507,
164
+ 887,
165
+ 882,
166
+ 917
167
+ ],
168
+ "page_idx": 0
169
+ },
170
+ {
171
+ "type": "page_footnote",
172
+ "text": "* Corresponding author, jie_zhou@fudan.edu.cn.",
173
+ "bbox": [
174
+ 141,
175
+ 904,
176
+ 445,
177
+ 917
178
+ ],
179
+ "page_idx": 0
180
+ },
181
+ {
182
+ "type": "page_number",
183
+ "text": "1375",
184
+ "bbox": [
185
+ 480,
186
+ 927,
187
+ 519,
188
+ 940
189
+ ],
190
+ "page_idx": 0
191
+ },
192
+ {
193
+ "type": "footer",
194
+ "text": "Findings of the Association for Computational Linguistics: ACL 2023, pages 1375-1386 July 9-14, 2023 ©2023 Association for Computational Linguistics",
195
+ "bbox": [
196
+ 228,
197
+ 945,
198
+ 768,
199
+ 972
200
+ ],
201
+ "page_idx": 0
202
+ },
203
+ {
204
+ "type": "image",
205
+ "img_path": "images/d4264851e0b04a6c2cea9b52f9406d6003b42f7c76a6b9cc23a6f0fa8cae9a2a.jpg",
206
+ "image_caption": [
207
+ "Table 1: The spans marked with blue, green, and red are time (TIME), person (PER), and place (PLACE) entities labeled by three annotators."
208
+ ],
209
+ "image_footnote": [],
210
+ "bbox": [
211
+ 134,
212
+ 80,
213
+ 863,
214
+ 192
215
+ ],
216
+ "page_idx": 1
217
+ },
218
+ {
219
+ "type": "text",
220
+ "text": "trains the model on a dataset where each sample is assigned a set of candidate labels (Cour et al., 2011; Wen et al., 2021). Thus, it is natural to utilize all the human labor via PLL, which can be divided into two types: 1) average-based methods, which consider each candidate class equally (Hüllermeier and Beringer, 2006; Zhang and Yu, 2015); 2) identification-based methods, which predict the ground-truth label as a latent variable via a translation matrix describing the scores of each candidate label (Feng and An, 2019; Yan and Guo, 2020; Feng et al., 2020). Despite extensive studies on PLL methods, there are still two challenges in our setting. One challenge (C1) is that these methods break down when the same candidate label occurs more than once. General PLL assumes that each candidate label has been assigned only once, while in our situation each sample may be assigned the same class multiple times by different annotators. Another challenge (C2) is that most existing studies on PLL focus on image or text classification tasks, while we focus on a more complex task, sequence labeling, where each token is assigned a label. Thus, the token itself and its content should be considered in this task.",
221
+ "bbox": [
222
+ 112,
223
+ 256,
224
+ 489,
225
+ 674
226
+ ],
227
+ "page_idx": 1
228
+ },
229
+ {
230
+ "type": "text",
231
+ "text": "In this paper, we propose a Confidence-based Partial Label Learning (CPLL) model for crowd-annotated NER. For C1, we treat the classes' labeled number for each sample as prior confidence provided by the annotators. Also, we learn the confidence scores via an Expectation-Maximization (EM) algorithm (Dempster et al., 1977). We estimate the real conditional probability $P(Y = y|T = t, X = \\mathbf{x})$ via a true posterior estimator based on the confidence that consists of the prior and posterior confidences. For C2, we learn a token- and content-dependent confidence via a confidence estimator to consider both the token $t$ and sequence input $\\mathbf{x}$ , because the candidate labels are always token-dependent and content-dependent. In",
232
+ "bbox": [
233
+ 112,
234
+ 678,
235
+ 489,
236
+ 919
237
+ ],
238
+ "page_idx": 1
239
+ },
240
+ {
241
+ "type": "text",
242
+ "text": "fact, our model can be applied to all sequence labeling tasks, such as word segmentation, part-of-speech tagging, etc. We conduct a series of experiments on one real-world dataset and four synthetic datasets. The empirical results show that our model can make use of crowd-annotated data effectively. We also explore the influence of annotation inconsistency and the balance between the prior and posterior confidences.",
243
+ "bbox": [
244
+ 507,
245
+ 256,
246
+ 884,
247
+ 384
248
+ ],
249
+ "page_idx": 1
250
+ },
251
+ {
252
+ "type": "text",
253
+ "text": "The main contributions of this work are listed as follows.",
254
+ "bbox": [
255
+ 507,
256
+ 385,
257
+ 882,
258
+ 417
259
+ ],
260
+ "page_idx": 1
261
+ },
262
+ {
263
+ "type": "list",
264
+ "sub_type": "text",
265
+ "list_items": [
266
+ "- To better utilize the crowd-annotated data, we propose a CPLL algorithm to incorporate the prior and posterior confidences for sequence labeling task (i.e., NER).",
267
+ "- To take the confidence scores into account, we design a true posterior estimator and confidence estimator to update the probability distribution of ground truth and token- and content-dependent confidence iteratively via the EM algorithm.",
268
+ "- Extensive experiments on both real-world and synthetic datasets show that our CPLL model outperforms the state-of-the-art baselines, which indicates that our model disambiguates the noise labels effectively."
269
+ ],
270
+ "bbox": [
271
+ 507,
272
+ 431,
273
+ 884,
274
+ 682
275
+ ],
276
+ "page_idx": 1
277
+ },
278
+ {
279
+ "type": "text",
280
+ "text": "2 Our Approach",
281
+ "text_level": 1,
282
+ "bbox": [
283
+ 507,
284
+ 709,
285
+ 672,
286
+ 726
287
+ ],
288
+ "page_idx": 1
289
+ },
290
+ {
291
+ "type": "text",
292
+ "text": "In this section, we first give the formal definition of our task. Then, we provide an overview of our proposed CPLL model. Finally, we introduce the main components contained in our model.",
293
+ "bbox": [
294
+ 507,
295
+ 736,
296
+ 882,
297
+ 800
298
+ ],
299
+ "page_idx": 1
300
+ },
301
+ {
302
+ "type": "text",
303
+ "text": "2.1 Formal Definition",
304
+ "text_level": 1,
305
+ "bbox": [
306
+ 507,
307
+ 813,
308
+ 697,
309
+ 828
310
+ ],
311
+ "page_idx": 1
312
+ },
313
+ {
314
+ "type": "text",
315
+ "text": "Given a training corpus $\\mathcal{D} = \\{\\mathbf{x}_i, (\\hat{Y}_i, A_i)\\}_{i=1}^{|\\mathcal{D}|}$ , where $\\mathbf{x} = \\{t_1, t_2, \\dots, t_{|\\mathbf{x}|}\\}, (\\hat{Y}, A) = \\{(\\hat{\\mathbf{y}}_1, \\mathbf{a}_1), (\\hat{\\mathbf{y}}_2, \\mathbf{a}_2), \\dots, (\\hat{\\mathbf{y}}_{|\\mathbf{x}|}, \\mathbf{a}_{|\\mathbf{x}|})\\}$ . Here, $\\hat{\\mathbf{y}} = \\{y_1, y_2, \\dots, y_{|\\hat{\\mathbf{y}}|}\\}$ is the candidate label set of the token $t$ and $\\mathbf{a} = [a_1, a_2, \\dots, a_{|\\hat{\\mathbf{y}}|}]$ is",
316
+ "bbox": [
317
+ 507,
318
+ 831,
319
+ 884,
320
+ 921
321
+ ],
322
+ "page_idx": 1
323
+ },
324
+ {
325
+ "type": "page_number",
326
+ "text": "1376",
327
+ "bbox": [
328
+ 480,
329
+ 927,
330
+ 521,
331
+ 940
332
+ ],
333
+ "page_idx": 1
334
+ },
335
+ {
336
+ "type": "image",
337
+ "img_path": "images/8883048ca92168c4264d1195e913e59d0163e381f88b2f76505dcfd9806d1479.jpg",
338
+ "image_caption": [
339
+ "Figure 1: The framework of our CPLL model, which consists of a true posterior estimator and confidence estimator. The true posterior estimator is used to predict the true posterior $P(Y = y|T = t, X = \\mathbf{x})$ based on the confidence score learned by the confidence estimator. The confidence estimator learns the confidence based on the prior confidence obtained from annotators and the posterior confidence learned by the model."
340
+ ],
341
+ "image_footnote": [],
342
+ "bbox": [
343
+ 115,
344
+ 80,
345
+ 884,
346
+ 360
347
+ ],
348
+ "page_idx": 2
349
+ },
350
+ {
351
+ "type": "text",
352
+ "text": "the labeled times obtained from the annotations. Specifically, $a$ is the number of times candidate label $y$ was assigned to token $t$ . $\\hat{\\mathbf{y}} \\in \\{2^{\\mathcal{Y}}\\backslash \\emptyset \\backslash \\mathcal{Y}\\}$ where $\\mathcal{Y}$ is the label space and $2^{\\mathcal{Y}}$ denotes the power set. For the rest of this paper, $y$ denotes the true label of token $t$ in text $\\mathbf{x}$ unless otherwise specified. The goal of this task is to predict the true posterior probability $P(Y = y|T = t,X = \\mathbf{x})$ of token $t$ in text $\\mathbf{x}$ .",
353
+ "bbox": [
354
+ 112,
355
+ 450,
356
+ 487,
357
+ 580
358
+ ],
359
+ "page_idx": 2
360
+ },
361
+ {
362
+ "type": "text",
363
+ "text": "2.2 Overview",
364
+ "text_level": 1,
365
+ "bbox": [
366
+ 112,
367
+ 589,
368
+ 236,
369
+ 604
370
+ ],
371
+ "page_idx": 2
372
+ },
373
+ {
374
+ "type": "text",
375
+ "text": "In this paper, we propose a Confidence-based Partial Label Learning (CPLL) model for crowd-annotated NER (Figure 1). Particularly, we learn the true posterior $P(Y = y|T = t,X = \\mathbf{x})$ via a true posterior estimator $f$ and a confidence score $g(y;\\hat{\\mathbf{y}},t,\\mathbf{x})$ by minimizing the following risk.",
376
+ "bbox": [
377
+ 112,
378
+ 609,
379
+ 487,
380
+ 708
381
+ ],
382
+ "page_idx": 2
383
+ },
384
+ {
385
+ "type": "equation",
386
+ "text": "\n$$\nR = \\mathbb {E} _ {p (\\mathbf {x}, \\hat {\\mathbf {y}})} \\left[ \\sum_ {t \\in \\mathbf {x}} \\sum_ {y} \\underbrace {g (y ; \\hat {\\mathbf {y}} , t , \\mathbf {x})} _ {\\text {Confidence}} * \\underbrace {\\mathcal {L} (f (y ; t , \\mathbf {x}) , y)} _ {\\text {True posterior}} \\right] \\tag {1}\n$$\n",
387
+ "text_format": "latex",
388
+ "bbox": [
389
+ 126,
390
+ 715,
391
+ 487,
392
+ 765
393
+ ],
394
+ "page_idx": 2
395
+ },
396
+ {
397
+ "type": "text",
398
+ "text": "where the classifier $f(y; t, \\mathbf{x})$ is used to predict $P(Y = y | T = t, X = \\mathbf{x})$ and $\\mathcal{L}$ is the loss. Particularly, we rely on the Expectation-Maximization algorithm (Dempster et al., 1977) to find the maximum likelihood parameters of CPLL by regarding the ground truth as a latent variable. In the M-step, we train a naive classifier $f$ to predict the true posterior $P(Y = y | T = t, X = \\mathbf{x})$ via a true posterior estimator (Section 2.3). In the E-step, we update",
399
+ "bbox": [
400
+ 112,
401
+ 774,
402
+ 489,
403
+ 919
404
+ ],
405
+ "page_idx": 2
406
+ },
407
+ {
408
+ "type": "text",
409
+ "text": "the confidence score via a confidence estimator (Section 2.4), which consists of the prior confidences (calculated from the annotations) and posterior confidences (learned by the model).",
410
+ "bbox": [
411
+ 507,
412
+ 450,
413
+ 882,
414
+ 514
415
+ ],
416
+ "page_idx": 2
417
+ },
418
+ {
419
+ "type": "text",
420
+ "text": "2.3 True Posterior Estimator",
421
+ "text_level": 1,
422
+ "bbox": [
423
+ 507,
424
+ 526,
425
+ 754,
426
+ 539
427
+ ],
428
+ "page_idx": 2
429
+ },
430
+ {
431
+ "type": "text",
432
+ "text": "First, we train a naive classifier as our true posterior estimator $f$ to infer the true posterior $P(Y = y|T = t,X = \\mathbf{x})$ . To model the sequence, we adopt a pre-trained language model (BERT (Kenton and Toutanova, 2019)) $\\mathcal{M}$ to learn a content-aware token representation. Specifically, we input the sequence $\\mathbf{x} = \\{t_1,t_2,\\dots ,t_{|\\mathbf{x}|}\\}$ into $\\mathcal{M}$ to obtain the sequence representations,",
433
+ "bbox": [
434
+ 507,
435
+ 546,
436
+ 882,
437
+ 675
438
+ ],
439
+ "page_idx": 2
440
+ },
441
+ {
442
+ "type": "equation",
443
+ "text": "\n$$\nH = \\mathcal {M} (\\mathbf {x}, \\theta_ {\\mathcal {M}}) \\tag {2}\n$$\n",
444
+ "text_format": "latex",
445
+ "bbox": [
446
+ 631,
447
+ 688,
448
+ 882,
449
+ 703
450
+ ],
451
+ "page_idx": 2
452
+ },
453
+ {
454
+ "type": "text",
455
+ "text": "where $\\theta_{\\mathcal{M}}$ denotes the parameters of $\\mathcal{M}$ and $H = [h_1, h_2, \\dots, h_{|\\mathbf{x}|}]$ . $h$ is the content-aware representation of token $t$ .",
456
+ "bbox": [
457
+ 507,
458
+ 715,
459
+ 882,
460
+ 762
461
+ ],
462
+ "page_idx": 2
463
+ },
464
+ {
465
+ "type": "text",
466
+ "text": "Then, we utilize a fully connected layer (FC) to predict the probability distribution,",
467
+ "bbox": [
468
+ 507,
469
+ 765,
470
+ 880,
471
+ 796
472
+ ],
473
+ "page_idx": 2
474
+ },
475
+ {
476
+ "type": "equation",
477
+ "text": "\n$$\nf (y; t, \\mathbf {x}) = \\sigma (W * h + b) \\tag {3}\n$$\n",
478
+ "text_format": "latex",
479
+ "bbox": [
480
+ 594,
481
+ 809,
482
+ 882,
483
+ 826
484
+ ],
485
+ "page_idx": 2
486
+ },
487
+ {
488
+ "type": "text",
489
+ "text": "where $\\sigma$ is a sigmoid function and $\\theta_{FC} = \\{W,b\\}$ are the learnable parameters of FC. We regard $\\theta = \\{\\theta_{\\mathcal{M}},\\theta_{FC}\\}$ as the parameter set of the true posterior estimator $f$ . Negative learning (Kim et al., 2019) is adopted, which not only considers \"the token",
490
+ "bbox": [
491
+ 507,
492
+ 838,
493
+ 882,
494
+ 917
495
+ ],
496
+ "page_idx": 2
497
+ },
498
+ {
499
+ "type": "page_number",
500
+ "text": "1377",
501
+ "bbox": [
502
+ 480,
503
+ 927,
504
+ 519,
505
+ 940
506
+ ],
507
+ "page_idx": 2
508
+ },
509
+ {
510
+ "type": "text",
511
+ "text": "belongs to the positive label (candidate label $y \\in \\hat{\\mathbf{y}}$ )\" but also \"the token does not belong to the negative label (its complementary label $y \\notin \\hat{\\mathbf{y}}$ )\". The loss function is computed as,",
512
+ "bbox": [
513
+ 112,
514
+ 84,
515
+ 487,
516
+ 149
517
+ ],
518
+ "page_idx": 3
519
+ },
520
+ {
521
+ "type": "equation",
522
+ "text": "\n$$\n\\mathcal {L} (f (y; t, \\mathbf {x}), y) = \\left\\{ \\begin{array}{l l} - \\log (f (y; t, \\mathbf {x})), & y \\in \\hat {\\mathbf {y}} \\\\ - \\log (1 - f (y; t, \\mathbf {x})), & y \\notin \\hat {\\mathbf {y}} \\end{array} \\right. \\tag {4}\n$$\n",
523
+ "text_format": "latex",
524
+ "bbox": [
525
+ 122,
526
+ 159,
527
+ 489,
528
+ 187
529
+ ],
530
+ "page_idx": 3
531
+ },
532
+ {
533
+ "type": "text",
534
+ "text": "Finally, we optimize the empirical risk by integrating confidence $g(y; \\hat{\\mathbf{y}}, t, \\mathbf{x})$ with the loss function (Equation 1). We will introduce the confidence $g(y; \\hat{\\mathbf{y}}, t, \\mathbf{x})$ in detail below.",
535
+ "bbox": [
536
+ 112,
537
+ 199,
538
+ 489,
539
+ 263
540
+ ],
541
+ "page_idx": 3
542
+ },
543
+ {
544
+ "type": "text",
545
+ "text": "2.4 Confidence Estimator",
546
+ "text_level": 1,
547
+ "bbox": [
548
+ 112,
549
+ 274,
550
+ 332,
551
+ 288
552
+ ],
553
+ "page_idx": 3
554
+ },
555
+ {
556
+ "type": "text",
557
+ "text": "The confidence estimator is used to learn the confidence scores $g(y; \\hat{\\mathbf{y}}, t, \\mathbf{x})$ , which represents the confidence of label $y$ given the token $t$ , text sequence $\\mathbf{x}$ , and partial label $\\hat{\\mathbf{y}}$ .",
558
+ "bbox": [
559
+ 112,
560
+ 294,
561
+ 489,
562
+ 359
563
+ ],
564
+ "page_idx": 3
565
+ },
566
+ {
567
+ "type": "equation",
568
+ "text": "\n$$\ng (y; \\hat {\\mathbf {y}}, t, \\mathbf {x}) = \\alpha * c _ {y; t, \\mathbf {x}} ^ {A} + (1 - \\alpha) * c _ {y; t, \\mathbf {x}} ^ {M} \\tag {5}\n$$\n",
569
+ "text_format": "latex",
570
+ "bbox": [
571
+ 129,
572
+ 370,
573
+ 487,
574
+ 391
575
+ ],
576
+ "page_idx": 3
577
+ },
578
+ {
579
+ "type": "text",
580
+ "text": "where the confidence score $c_{y;t,\\mathbf{x}}^{M}$ is learned by the model and $c_{y;t,\\mathbf{x}}^{A}$ is given by the annotators. $\\alpha$ is a hyper-parameter used to balance these two terms. The annotators affect the quality of the datasets, and we can calculate the prior confidence from the labeled times of each class. However, the prior confidence is biased since the selected annotators have biases. To address this problem, we also let the model learn a posterior confidence to reduce the biases in the prior confidence.",
581
+ "bbox": [
582
+ 112,
583
+ 401,
584
+ 487,
585
+ 565
586
+ ],
587
+ "page_idx": 3
588
+ },
589
+ {
590
+ "type": "text",
591
+ "text": "Posterior Confidence We update the posterior confidence $c_{y;t,\\mathbf{x}}^{M}$ based on the true posterior distribution $P(Y = y|T = t, X = \\mathbf{x})$ estimated by the true posterior estimator $f(y; t, \\mathbf{x})$ .",
592
+ "bbox": [
593
+ 112,
594
+ 574,
595
+ 489,
596
+ 639
597
+ ],
598
+ "page_idx": 3
599
+ },
600
+ {
601
+ "type": "equation",
602
+ "text": "\n$$\nc _ {y; t, \\mathbf {x}} ^ {M} = \\left\\{ \\begin{array}{l l} \\frac {\\exp (P (Y = y | T = t , X = \\mathbf {x}))}{\\sum_ {\\hat {y} \\in \\hat {\\mathbf {y}}} \\exp (P (Y = \\hat {y} | T = t , X = \\mathbf {x}))}, & y \\in \\hat {\\mathbf {y}} \\\\ \\frac {\\exp (P (Y = y | T = t , X = \\mathbf {x}))}{\\sum_ {\\hat {y} \\notin \\hat {\\mathbf {y}}} \\exp (P (Y = \\hat {y} | T = t , X = \\mathbf {x}))}, & y \\notin \\hat {\\mathbf {y}} \\end{array} \\right. \\tag {6}\n$$\n",
603
+ "text_format": "latex",
604
+ "bbox": [
605
+ 119,
606
+ 649,
607
+ 485,
608
+ 712
609
+ ],
610
+ "page_idx": 3
611
+ },
612
+ {
613
+ "type": "text",
614
+ "text": "We calculate the confidence score for positive and negative labels independently.",
615
+ "bbox": [
616
+ 112,
617
+ 713,
618
+ 485,
619
+ 745
620
+ ],
621
+ "page_idx": 3
622
+ },
623
+ {
624
+ "type": "text",
625
+ "text": "Prior Confidence We translate the label count $a$ obtained from the annotation into the prior confidence $c_{y;t,\\mathbf{x}}^{A}$",
626
+ "bbox": [
627
+ 112,
628
+ 753,
629
+ 485,
630
+ 806
631
+ ],
632
+ "page_idx": 3
633
+ },
634
+ {
635
+ "type": "equation",
636
+ "text": "\n$$\nc _ {y; t, \\mathbf {x}} ^ {A} = \\left\\{ \\begin{array}{l l} \\frac {\\exp (a)}{\\sum_ {\\tilde {a} \\in \\mathbf {a}} e x p (\\tilde {a})}, & y \\in \\hat {\\mathbf {y}} \\\\ 0, & y \\notin \\hat {\\mathbf {y}} \\end{array} \\right. \\tag {7}\n$$\n",
637
+ "text_format": "latex",
638
+ "bbox": [
639
+ 176,
640
+ 816,
641
+ 487,
642
+ 858
643
+ ],
644
+ "page_idx": 3
645
+ },
646
+ {
647
+ "type": "text",
648
+ "text": "Note that both $c_{y;t,\\mathbf{x}}^{M}$ and $c_{y;t,\\mathbf{x}}^{A}$ are token- and content-dependent. The annotations are always affected by both the token itself and the content of",
649
+ "bbox": [
650
+ 112,
651
+ 870,
652
+ 489,
653
+ 917
654
+ ],
655
+ "page_idx": 3
656
+ },
657
+ {
658
+ "type": "table",
659
+ "img_path": "images/30e168f396e8a191be271b8790341e4161abf5343188f1ff9738289bd93f8d36.jpg",
660
+ "table_caption": [],
661
+ "table_footnote": [
662
+ "Table 2: The statistical information of real-world dataset. #Sample means the number of samples in the corresponding dataset. #TIME, #PLACE and #PERSON represent the number of time, place, and person entities."
663
+ ],
664
+ "table_body": "<table><tr><td></td><td>#Sample</td><td>#TIME</td><td>#PLACE</td><td>#PERSON</td></tr><tr><td>Training</td><td>1000</td><td>6934</td><td>958</td><td>3518</td></tr><tr><td>Dev</td><td>440</td><td>955</td><td>147</td><td>351</td></tr><tr><td>Test</td><td>441</td><td>1015</td><td>171</td><td>356</td></tr></table>",
665
+ "bbox": [
666
+ 510,
667
+ 80,
668
+ 882,
669
+ 152
670
+ ],
671
+ "page_idx": 3
672
+ },
673
+ {
674
+ "type": "text",
675
+ "text": "the token. Thus, we model the confidence by considering both the token and its content. Finally, we compute the final confidence score $g(y; \\hat{\\mathbf{y}}, t, \\mathbf{x})$ via Equation 5, which accounts for the biases of both the annotators and the model.",
676
+ "bbox": [
677
+ 507,
678
+ 244,
679
+ 882,
680
+ 324
681
+ ],
682
+ "page_idx": 3
683
+ },
684
+ {
685
+ "type": "text",
686
+ "text": "We update the parameters $\\theta$ and the confidence scores in the M-step and E-step of the EM algorithm. Specifically, we run the true posterior estimator and the confidence estimator iteratively. The initialization of $c_{y;t,\\mathbf{x}}^{M}$ is $\\frac{1}{|\\hat{\\mathbf{y}}|}$ for $y\\in \\hat{\\mathbf{y}}$ and $\\frac{1}{|\\mathcal{V}| - |\\hat{\\mathbf{y}}|}$ for $y\\notin \\hat{\\mathbf{y}}$.",
687
+ "bbox": [
688
+ 507,
689
+ 326,
690
+ 882,
691
+ 424
692
+ ],
693
+ "page_idx": 3
694
+ },
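The confidence computation described above (Equations 5-7) and its initialization can be sketched as follows. This is a minimal illustration, not the authors' code: the function names and the dict-based label representation are assumptions.

```python
import math

def prior_confidence(label_counts, candidates, vocab):
    # Eq. 7: softmax over annotators' label counts for candidate labels,
    # zero prior for non-candidate labels. `label_counts` maps each
    # candidate label to the number of annotators who chose it.
    z = sum(math.exp(a) for a in label_counts.values())
    return {y: (math.exp(label_counts[y]) / z if y in candidates else 0.0)
            for y in vocab}

def posterior_confidence(posterior, candidates, vocab):
    # Eq. 6: normalize exp(P(Y=y|T=t,X=x)) separately over candidate
    # (partial) and non-candidate labels.
    z_in = sum(math.exp(posterior[y]) for y in vocab if y in candidates)
    z_out = sum(math.exp(posterior[y]) for y in vocab if y not in candidates)
    return {y: math.exp(posterior[y]) / (z_in if y in candidates else z_out)
            for y in vocab}

def combined_confidence(c_prior, c_post, alpha):
    # Eq. 5: g = alpha * c^A + (1 - alpha) * c^M
    return {y: alpha * c_prior[y] + (1 - alpha) * c_post[y] for y in c_prior}

def init_posterior_confidence(candidates, vocab):
    # EM initialization: 1/|y_hat| for candidate labels,
    # 1/(|V| - |y_hat|) for all other labels in the vocabulary.
    k, v = len(candidates), len(vocab)
    return {y: (1 / k if y in candidates else 1 / (v - k)) for y in vocab}
```

In the EM loop, `posterior_confidence` would be refreshed from the model's current estimate in each E-step, while the prior term stays fixed by the annotations.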
695
+ {
696
+ "type": "text",
697
+ "text": "3 Experimental Setups",
698
+ "text_level": 1,
699
+ "bbox": [
700
+ 507,
701
+ 438,
702
+ 724,
703
+ 455
704
+ ],
705
+ "page_idx": 3
706
+ },
707
+ {
708
+ "type": "text",
709
+ "text": "In this section, we first introduce one real-world and four synthetic datasets we adopted to evaluate the performance (Section 3.1). Then, we list the selected popular baselines to investigate the validity of our CPLL model (Section 3.2). Finally, we present the implementation details and metrics to replicate the experiment easily (Section 3.3).",
710
+ "bbox": [
711
+ 507,
712
+ 464,
713
+ 882,
714
+ 577
715
+ ],
716
+ "page_idx": 3
717
+ },
718
+ {
719
+ "type": "text",
720
+ "text": "3.1 Datasets",
721
+ "text_level": 1,
722
+ "bbox": [
723
+ 507,
724
+ 590,
725
+ 623,
726
+ 604
727
+ ],
728
+ "page_idx": 3
729
+ },
730
+ {
731
+ "type": "text",
732
+ "text": "Real-World Dataset. To build the real-world dataset, we ask the annotators to label the person, place, and time in the text independently. Each sample is assigned to three annotators with guidelines and several examples. To be specific, we ask three students to label 1000 samples as the training set. The average Kappa value among the annotators is 0.215, indicating that the crowd annotators have low agreement on identifying entities in this data. In order to evaluate the system performances, we create a subset of the corpus with gold annotations. Concretely, we randomly select 881 sentences from the raw dataset and let two experts generate the gold annotations. Among them, we use 440 sentences as the development set and the remaining 441 as the test set. Table 2 shows the statistical information of this dataset.",
733
+ "bbox": [
734
+ 507,
735
+ 612,
736
+ 882,
737
+ 884
738
+ ],
739
+ "page_idx": 3
740
+ },
741
+ {
742
+ "type": "text",
743
+ "text": "Synthetic Datasets. Inspired by (Rodrigues et al., 2014), we build synthetic datasets by adding",
744
+ "bbox": [
745
+ 507,
746
+ 887,
747
+ 882,
748
+ 917
749
+ ],
750
+ "page_idx": 3
751
+ },
752
+ {
753
+ "type": "page_number",
754
+ "text": "1378",
755
+ "bbox": [
756
+ 480,
757
+ 927,
758
+ 519,
759
+ 940
760
+ ],
761
+ "page_idx": 3
762
+ },
763
+ {
764
+ "type": "table",
765
+ "img_path": "images/7e391eceae28d8f8163f3eff6dc24d43e2c0097689c1126b819802a30acba188.jpg",
766
+ "table_caption": [],
767
+ "table_footnote": [],
768
+ "table_body": "<table><tr><td></td><td>#Original</td><td>r</td><td>BI</td><td>#c</td><td>Error Percent</td></tr><tr><td rowspan=\"4\">Weibo</td><td rowspan=\"4\">4951</td><td>5%</td><td>35</td><td>134</td><td>3.4%</td></tr><tr><td>10%</td><td>143</td><td>546</td><td>13.9%</td></tr><tr><td>20%</td><td>494</td><td>1706</td><td>44.4%</td></tr><tr><td>25%</td><td>615</td><td>2411</td><td>61.0%</td></tr><tr><td rowspan=\"4\">Resume</td><td rowspan=\"4\">79014</td><td>5%</td><td>244</td><td>2011</td><td>2.8%</td></tr><tr><td>10%</td><td>920</td><td>7361</td><td>10.4%</td></tr><tr><td>20%</td><td>2979</td><td>25408</td><td>35.9%</td></tr><tr><td>25%</td><td>4145</td><td>37585</td><td>52.8%</td></tr><tr><td rowspan=\"4\">Ontonotes</td><td rowspan=\"4\">41203</td><td>5%</td><td>295</td><td>1246</td><td>3.7%</td></tr><tr><td>10%</td><td>978</td><td>4368</td><td>12.9%</td></tr><tr><td>20%</td><td>3151</td><td>14849</td><td>43.6%</td></tr><tr><td>25%</td><td>4420</td><td>20542</td><td>60.5%</td></tr><tr><td rowspan=\"4\">MSRA</td><td rowspan=\"4\">241809</td><td>5%</td><td>1439</td><td>6869</td><td>3.4%</td></tr><tr><td>10%</td><td>5115</td><td>26343</td><td>13.0%</td></tr><tr><td>20%</td><td>16729</td><td>86549</td><td>42.0%</td></tr><tr><td>25%</td><td>23163</td><td>120707</td><td>59.4%</td></tr></table>",
769
+ "bbox": [
770
+ 124,
771
+ 80,
772
+ 480,
773
+ 303
774
+ ],
775
+ "page_idx": 4
776
+ },
777
+ {
778
+ "type": "text",
779
+ "text": "Table 3: The statistical information of the synthetic datasets. #Original means the number of tokens labeled as an entity (not O) in the original dataset. BI/C means the number of tokens that have a wrong BI/Category label but the right Category/BI label. Error Percent $=$ (BI+C)/#Original.",
780
+ "bbox": [
781
+ 112,
782
+ 311,
783
+ 489,
784
+ 398
785
+ ],
786
+ "page_idx": 4
787
+ },
788
+ {
789
+ "type": "text",
790
+ "text": "noise on four typical NER datasets: MSRA (Levow, 2006), Weibo (Peng and Dredze, 2015), Ontonotes 4.0 (Weischedel et al., 2011) and Resume (Zhang and Yang, 2018). To simulate realistic noise, we add noise to the original datasets using four rules: 1) BE (Bound Error), which adds or deletes some tokens of the entity to destroy the boundary (changing \"room 1003\" to \"(room 1003\"); 2) ME (Missing Error), which removes the entity from the label (\"David\" is not labeled); 3) CE (Category Error), which changes the category of the entity (changing \"Location\" to \"Organization\"); 4) SE (Segmentation Error), which splits the entity into two entities (changing \"tomorrow at 10:00 am\" to \"tomorrow\" and \"at 10:00 am\"). We run each rule randomly with a perturbation rate $r$, which is set as $10\\%$ in the experiments. Additionally, we explore the influence of annotation inconsistency with different rates. Table 3 shows the statistical information of these datasets based on token-level majority voting. We can find that a large number of entities are perturbed by our rules. For example, more than $40\\%$ of the tokens labeled as entities are perturbed at a perturbation rate $r$ of $20\\%$.",
791
+ "bbox": [
792
+ 110,
793
+ 426,
794
+ 489,
795
+ 813
796
+ ],
797
+ "page_idx": 4
798
+ },
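The four perturbation rules above can be sketched at the span level as follows. This is an illustrative approximation under an assumed (start, end, category) span representation, not the authors' exact procedure:

```python
import random

def perturb_entities(entities, rate, categories, seed=0):
    """Apply one of the four noise rules to each entity with probability `rate`.

    BE shifts a boundary, ME drops the entity, CE swaps the category,
    and SE splits the span in two."""
    rng = random.Random(seed)
    noisy = []
    for start, end, cat in entities:
        if rng.random() >= rate:
            noisy.append((start, end, cat))  # left untouched
            continue
        rule = rng.choice(["BE", "ME", "CE", "SE"])
        if rule == "BE":                        # Bound Error: destroy a boundary
            noisy.append((max(0, start - 1), end, cat))
        elif rule == "ME":                      # Missing Error: drop the label
            continue
        elif rule == "CE":                      # Category Error: wrong type
            others = [c for c in categories if c != cat]
            noisy.append((start, end, rng.choice(others) if others else cat))
        elif rule == "SE" and end - start > 1:  # Segmentation Error: split
            mid = (start + end) // 2
            noisy.append((start, mid, cat))
            noisy.append((mid, end, cat))
        else:
            noisy.append((start, end, cat))
    return noisy
```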
799
+ {
800
+ "type": "text",
801
+ "text": "3.2 Baselines",
802
+ "text_level": 1,
803
+ "bbox": [
804
+ 112,
805
+ 829,
806
+ 235,
807
+ 844
808
+ ],
809
+ "page_idx": 4
810
+ },
811
+ {
812
+ "type": "text",
813
+ "text": "To verify the effectiveness of our CPLL model, we compare it with several strong and typical baselines, which can be categorized into three groups: voting-based models, partial label learning-based models,",
814
+ "bbox": [
815
+ 112,
816
+ 854,
817
+ 489,
818
+ 917
819
+ ],
820
+ "page_idx": 4
821
+ },
822
+ {
823
+ "type": "text",
824
+ "text": "and annotator-based models.",
825
+ "bbox": [
826
+ 509,
827
+ 84,
828
+ 724,
829
+ 98
830
+ ],
831
+ "page_idx": 4
832
+ },
833
+ {
834
+ "type": "text",
835
+ "text": "- Voting-based models. We select two voting-based models, entity-level and token-level voting models. The entity-level voting model obtains the ground truth by voting at the entity level. The token-level voting model calculates the ground truth by voting at the token level. A BERT-based sequence labeling model (Kenton and Toutanova, 2019) is trained based on the ground truth calculated by voting.",
836
+ "bbox": [
837
+ 509,
838
+ 112,
839
+ 885,
840
+ 256
841
+ ],
842
+ "page_idx": 4
843
+ },
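Token-level majority voting, as used by the voting baselines above, can be sketched as follows (an illustrative helper, not the baselines' actual code):

```python
from collections import Counter

def token_majority_vote(annotations):
    """Token-level voting: `annotations` is a list of label sequences,
    one per annotator; return the most frequent label at each position.

    Ties are broken by Counter's insertion order (first annotator wins).
    Entity-level voting would instead vote over whole (span, type) pairs."""
    voted = []
    for labels in zip(*annotations):
        voted.append(Counter(labels).most_common(1)[0][0])
    return voted
```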
844
+ {
845
+ "type": "text",
846
+ "text": "- Partial label learning-based models. We adopt two classic PLL baselines to utilize the crowd-annotated data with multiple candidate labels. PRODEN-mlp (Lv et al., 2020) adopts a classifier-consistent risk estimator with a progressive identification method for PLL. Wen et al. (2021) propose a Leveraged Weighted (LW) loss for PLL to take the partial and non-partial labels into account, which is proven to be risk-consistent. It achieved state-of-the-art results on various computer vision tasks. We implement the models by adapting the official code to our NER task.",
847
+ "bbox": [
848
+ 509,
849
+ 267,
850
+ 885,
851
+ 475
852
+ ],
853
+ "page_idx": 4
854
+ },
855
+ {
856
+ "type": "text",
857
+ "text": "- Annotator-based models. Given the great success of fully-supervised learning, a natural idea is to recover fully-supervised data from crowd-annotated data. Seqcrowd (Nguyen et al., 2017) uses a crowd component, a Hidden Markov Model (HMM) learned with the Expectation-Maximization algorithm, to transform crowd-annotated data into fully-supervised data instead of simply voting at the token or entity level. Once the ground truth is estimated by this crowd component, an efficient fully-supervised learning method can be adopted to finish the corresponding task.",
858
+ "bbox": [
859
+ 509,
860
+ 488,
861
+ 885,
862
+ 712
863
+ ],
864
+ "page_idx": 4
865
+ },
866
+ {
867
+ "type": "text",
868
+ "text": "3.3 Implementation Details and Metrics",
869
+ "text_level": 1,
870
+ "bbox": [
871
+ 509,
872
+ 734,
873
+ 838,
874
+ 750
875
+ ],
876
+ "page_idx": 4
877
+ },
878
+ {
879
+ "type": "text",
880
+ "text": "We implement our model with PyTorch (Paszke et al., 2019) and the Transformers framework, running on a GTX TITAN X GPU. The Chinese-RoBERTa-wwm-ext model (Cui et al., 2019) is used for our true posterior estimator. We utilize the Adam optimizer (Kingma and Ba, 2014) to update our model and set different learning rates for the BERT module (0.00002) and the remaining modules (0.002). The max sequence length",
881
+ "bbox": [
882
+ 507,
883
+ 755,
884
+ 884,
885
+ 883
886
+ ],
887
+ "page_idx": 4
888
+ },
889
+ {
890
+ "type": "page_footnote",
891
+ "text": "<sup>1</sup>https://huggingface.co/hfl/chinese-roberta-wwm-ext/tree/main",
892
+ "bbox": [
893
+ 509,
894
+ 892,
895
+ 843,
896
+ 917
897
+ ],
898
+ "page_idx": 4
899
+ },
900
+ {
901
+ "type": "page_number",
902
+ "text": "1379",
903
+ "bbox": [
904
+ 482,
905
+ 927,
906
+ 519,
907
+ 940
908
+ ],
909
+ "page_idx": 4
910
+ },
911
+ {
912
+ "type": "table",
913
+ "img_path": "images/24e74cf2e0a4a2ccbf2512b352b14c89633f884c1be367704fd5dba58c097b26.jpg",
914
+ "table_caption": [],
915
+ "table_footnote": [],
916
+ "table_body": "<table><tr><td colspan=\"2\"></td><td>Real-World Dev</td><td>Test</td><td>Ontonotes Dev</td><td>Test</td><td>Weibo Dev</td><td>Test</td><td>Resume Dev</td><td>Test</td><td>MSRA Test</td></tr><tr><td>Ours</td><td>CPLL</td><td>90.37</td><td>90.60</td><td>79.39</td><td>81.47</td><td>69.72</td><td>68.23</td><td>96.57</td><td>96.07</td><td>95.42</td></tr><tr><td rowspan=\"2\">Voting</td><td>Token-level</td><td>89.45</td><td>90.40</td><td>78.17</td><td>80.12</td><td>67.79</td><td>63.81</td><td>95.81</td><td>95.39</td><td>94.68</td></tr><tr><td>Entity-level</td><td>89.79</td><td>90.04</td><td>78.02</td><td>79.30</td><td>65.59</td><td>59.34</td><td>95.64</td><td>94.88</td><td>94.78</td></tr><tr><td rowspan=\"2\">PLL</td><td>PRODEN-mlp</td><td>87.39</td><td>87.90</td><td>73.04</td><td>75.36</td><td>66.37</td><td>61.85</td><td>93.90</td><td>94.90</td><td>92.46</td></tr><tr><td>LW loss</td><td>88.80</td><td>89.83</td><td>79.07</td><td>80.45</td><td>69.63</td><td>64.26</td><td>96.37</td><td>95.64</td><td>95.35</td></tr><tr><td>Annotator</td><td>Seqcrowd</td><td>-</td><td>-</td><td>62.80</td><td>65.34</td><td>47.56</td><td>41.49</td><td>92.73</td><td>93.30</td><td>91.90</td></tr><tr><td>Upper Bound</td><td>Clean data</td><td>-</td><td>-</td><td>79.74</td><td>81.47</td><td>70.83</td><td>68.87</td><td>96.64</td><td>96.31</td><td>95.53</td></tr></table>",
917
+ "bbox": [
918
+ 119,
919
+ 80,
920
+ 884,
921
+ 225
922
+ ],
923
+ "page_idx": 5
924
+ },
925
+ {
926
+ "type": "table",
927
+ "img_path": "images/144a5634be001d11b04405747df44ab8fa64d531692ea40bd08f7958023f1796.jpg",
928
+ "table_caption": [
929
+ "Table 4: The performance of our model and baselines in terms of F1. For real-world dataset, we do not report the results on clean data and Seqcrowd since we do not have ground truth for the training set."
930
+ ],
931
+ "table_footnote": [],
932
+ "table_body": "<table><tr><td></td><td colspan=\"2\">Real-World</td><td colspan=\"2\">Ontonotes</td><td colspan=\"2\">Weibo</td><td colspan=\"2\">Resume</td><td>MSRA</td></tr><tr><td></td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Test</td></tr><tr><td>CPLL</td><td>90.37</td><td>90.60</td><td>79.39</td><td>81.47</td><td>69.72</td><td>68.23</td><td>96.57</td><td>96.07</td><td>95.42</td></tr><tr><td>w/o Posterior Confidence</td><td>89.51</td><td>90.08</td><td>79.11</td><td>80.42</td><td>68.83</td><td>65.84</td><td>95.74</td><td>95.38</td><td>94.79</td></tr><tr><td>w/o Prior Confidence</td><td>90.60</td><td>90.94</td><td>79.68</td><td>80.87</td><td>70.57</td><td>64.90</td><td>96.21</td><td>95.70</td><td>95.20</td></tr><tr><td>w/o Both</td><td>86.73</td><td>86.32</td><td>78.66</td><td>80.22</td><td>67.33</td><td>61.59</td><td>95.72</td><td>95.23</td><td>94.61</td></tr></table>",
933
+ "bbox": [
934
+ 119,
935
+ 275,
936
+ 884,
937
+ 379
938
+ ],
939
+ "page_idx": 5
940
+ },
941
+ {
942
+ "type": "text",
943
+ "text": "Table 5: The performance of ablation studies.",
944
+ "bbox": [
945
+ 342,
946
+ 388,
947
+ 653,
948
+ 403
949
+ ],
950
+ "page_idx": 5
951
+ },
952
+ {
953
+ "type": "text",
954
+ "text": "is 512, the batch size is 8, and the dropout rate is 0.1. We search for the best $\\alpha$ from 0.1 to 0.9 with step 0.1 using the development set. All the baselines use the hyper-parameter settings reported in their papers. Our source code will be available soon after this paper is accepted.",
955
+ "bbox": [
956
+ 112,
957
+ 428,
958
+ 487,
959
+ 524
960
+ ],
961
+ "page_idx": 5
962
+ },
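The $\alpha$ search described above (0.1 to 0.9 with step 0.1, selected on the development set) amounts to a simple grid search. A sketch, where `evaluate_dev` is a hypothetical callback that trains the model with a given $\alpha$ and returns dev F1:

```python
def search_alpha(evaluate_dev):
    # Candidate values 0.1 .. 0.9 with step 0.1, as in the paper's search.
    candidates = [round(0.1 * i, 1) for i in range(1, 10)]
    # Keep the alpha whose dev-set score is highest.
    return max(candidates, key=evaluate_dev)
```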
963
+ {
964
+ "type": "text",
965
+ "text": "To measure the performance of the models, we adopt Macro-F1 as the metric, which is widely used for NER (Yadav and Bethard, 2018). In particular, we evaluate the performance on the span level, where the answer will be considered correct only when the entire span is matched.",
966
+ "bbox": [
967
+ 112,
968
+ 524,
969
+ 489,
970
+ 621
971
+ ],
972
+ "page_idx": 5
973
+ },
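The span-level matching rule above (an answer counts as correct only when the entire span matches) can be sketched for a single entity type as follows; the paper macro-averages such scores across entity types, and the helper name is an assumption:

```python
def span_f1(pred_spans, gold_spans):
    """Exact-match span F1: a predicted entity is a true positive only
    when its entire (start, end, type) triple matches a gold span."""
    pred, gold = set(pred_spans), set(gold_spans)
    tp = len(pred & gold)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```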
974
+ {
975
+ "type": "text",
976
+ "text": "4 Experimental Results",
977
+ "text_level": 1,
978
+ "bbox": [
979
+ 112,
980
+ 634,
981
+ 334,
982
+ 651
983
+ ],
984
+ "page_idx": 5
985
+ },
986
+ {
987
+ "type": "text",
988
+ "text": "In this section, we conduct a series of experiments to investigate the effectiveness of the proposed CPLL model. Specifically, we compare our model with three kinds of strong baselines (Section 4.1) and do ablation studies to explore the influence of the key parts contained in CPLL (Section 4.2). Also, we investigate the influence of annotation inconsistency (Section 4.3) and hyper-parameter $\\alpha$ , which controls the balance of posterior confidence and prior confidence (Section 4.4).",
989
+ "bbox": [
990
+ 112,
991
+ 659,
992
+ 489,
993
+ 821
994
+ ],
995
+ "page_idx": 5
996
+ },
997
+ {
998
+ "type": "text",
999
+ "text": "4.1 Main Results",
1000
+ "text_level": 1,
1001
+ "bbox": [
1002
+ 112,
1003
+ 833,
1004
+ 265,
1005
+ 848
1006
+ ],
1007
+ "page_idx": 5
1008
+ },
1009
+ {
1010
+ "type": "text",
1011
+ "text": "To evaluate the performance of our model, we present the results of the compared baselines and our CPLL model (see Table 4). First, we find that our model outperforms all the baselines on both",
1012
+ "bbox": [
1013
+ 112,
1014
+ 854,
1015
+ 487,
1016
+ 917
1017
+ ],
1018
+ "page_idx": 5
1019
+ },
1020
+ {
1021
+ "type": "text",
1022
+ "text": "the real-world and synthetic datasets. The labels obtained by voting-based methods (e.g., token-level voting and entity-level voting) always contain much noise because of the large labeling space and the complexity of this task. PLL-based models (e.g., PRODEN-mlp and LW loss) ignore the label counts from the annotators. Furthermore, annotator-based methods (e.g., Seqcrowd) aim to find the trustworthy label or annotator. Note that Seqcrowd does not work on Weibo and performs poorly on Ontonotes. This is because Seqcrowd cannot handle datasets that are small or heavily noisy, which is also verified in Section 2. All these methods cause information loss, which largely degrades the performance of the models. Our CPLL model makes use of the crowd-annotated data by translating this task into a PLL task that integrates confidence. Second, our CPLL model can reduce the influence of noise effectively. From the results, we observe that CPLL obtains results comparable with the model trained on the clean data. Our confidence estimator can effectively learn the bias generated by annotations via the posterior and prior confidence.",
1023
+ "bbox": [
1024
+ 507,
1025
+ 428,
1026
+ 884,
1027
+ 814
1028
+ ],
1029
+ "page_idx": 5
1030
+ },
1031
+ {
1032
+ "type": "text",
1033
+ "text": "4.2 Ablation Studies",
1034
+ "text_level": 1,
1035
+ "bbox": [
1036
+ 507,
1037
+ 831,
1038
+ 687,
1039
+ 845
1040
+ ],
1041
+ "page_idx": 5
1042
+ },
1043
+ {
1044
+ "type": "text",
1045
+ "text": "To evaluate the effectiveness of each part contained in our model, we do ablation studies (See Table 5). We remove posterior confidence (w/o Posterior Confidence), prior confidence (w/o Prior Confi",
1046
+ "bbox": [
1047
+ 507,
1048
+ 854,
1049
+ 882,
1050
+ 917
1051
+ ],
1052
+ "page_idx": 5
1053
+ },
1054
+ {
1055
+ "type": "page_number",
1056
+ "text": "1380",
1057
+ "bbox": [
1058
+ 482,
1059
+ 928,
1060
+ 521,
1061
+ 940
1062
+ ],
1063
+ "page_idx": 5
1064
+ },
1065
+ {
1066
+ "type": "image",
1067
+ "img_path": "images/fc6c21b82d39df1c9513501038e68003c1dde5993303bfbf65c79ea902ae6f23.jpg",
1068
+ "image_caption": [
1069
+ "(a) Weibo"
1070
+ ],
1071
+ "image_footnote": [],
1072
+ "bbox": [
1073
+ 132,
1074
+ 73,
1075
+ 374,
1076
+ 212
1077
+ ],
1078
+ "page_idx": 6
1079
+ },
1080
+ {
1081
+ "type": "image",
1082
+ "img_path": "images/4d1c12532177fede0a39bbde6c689a0e890a6ae38bc168e95a1e87f18a38ce25.jpg",
1083
+ "image_caption": [
1084
+ "(b)Resume"
1085
+ ],
1086
+ "image_footnote": [],
1087
+ "bbox": [
1088
+ 376,
1089
+ 74,
1090
+ 615,
1091
+ 212
1092
+ ],
1093
+ "page_idx": 6
1094
+ },
1095
+ {
1096
+ "type": "image",
1097
+ "img_path": "images/50d996c24e99c262018cf9a1ec71af65a0ae987406b7de942bf12b0345610344.jpg",
1098
+ "image_caption": [
1099
+ "(c) Ontonotes",
1100
+ "Figure 2: The influence of annotation inconsistency."
1101
+ ],
1102
+ "image_footnote": [],
1103
+ "bbox": [
1104
+ 618,
1105
+ 74,
1106
+ 860,
1107
+ 212
1108
+ ],
1109
+ "page_idx": 6
1110
+ },
1111
+ {
1112
+ "type": "text",
1113
+ "text": "dence), and both of them (w/o Both) from CPLL model. For w/o Both, we remove the confidence estimator by setting the confidences as $1 / |\\hat{\\mathbf{y}}|$ for partial labels and 0 for non-partial labels.",
1114
+ "bbox": [
1115
+ 112,
1116
+ 273,
1117
+ 487,
1118
+ 337
1119
+ ],
1120
+ "page_idx": 6
1121
+ },
1122
+ {
1123
+ "type": "text",
1124
+ "text": "From the results, we make the following observations. 1) The confidence estimator can learn the annotation bias effectively. Removing it (w/o Both) costs more than 4 points of F1 on the test sets of the real-world and Weibo datasets. 2) Both the posterior and the prior confidence are useful for this task. Obviously, the prior confidence is vital for leveraging the labeling confidence given by the annotators. However, the prior confidence may be biased since the number of annotators is limited. Thus, the posterior confidence learned by the model is also crucial for partial label learning to rectify the prediction.",
1125
+ "bbox": [
1126
+ 112,
1127
+ 338,
1128
+ 489,
1129
+ 533
1130
+ ],
1131
+ "page_idx": 6
1132
+ },
1133
+ {
1134
+ "type": "text",
1135
+ "text": "4.3 Influence of Annotation Inconsistency",
1136
+ "text_level": 1,
1137
+ "bbox": [
1138
+ 112,
1139
+ 543,
1140
+ 455,
1141
+ 558
1142
+ ],
1143
+ "page_idx": 6
1144
+ },
1145
+ {
1146
+ "type": "text",
1147
+ "text": "We also explore the influence of annotation inconsistency on synthetic datasets with various perturbation rates. Annotation inconsistency is used to model the label quality of crowd-sourcing. The bigger the perturbation rate, the worse the quality of the annotation. We report the results with a rate from $5\\%$ to $25\\%$ with step $5\\%$ over Weibo, Resume, and Ontonotes datasets (Figure 2).",
1148
+ "bbox": [
1149
+ 112,
1150
+ 564,
1151
+ 489,
1152
+ 692
1153
+ ],
1154
+ "page_idx": 6
1155
+ },
1156
+ {
1157
+ "type": "text",
1158
+ "text": "First, our CPLL model outperforms all the baselines under different perturbation rates. Moreover, the higher the annotation inconsistency, the more our model improves relative to the baselines. Our model can reduce the influence of annotation inconsistency more effectively. Second, several baselines barely work at a large perturbation rate (e.g., $25\\%$), while our model handles it effectively. The F1 score of Seqcrowd drops below 20 when the rate $r$ exceeds $20\\%$. Third, the annotation quality clearly has a large effect on the performance of the model. The higher the inconsistency, the worse the quality of the annotation and the worse the performance of the model.",
1159
+ "bbox": [
1160
+ 112,
1161
+ 694,
1162
+ 490,
1163
+ 917
1164
+ ],
1165
+ "page_idx": 6
1166
+ },
1167
+ {
1168
+ "type": "text",
1169
+ "text": "4.4 Influence of Hyper-parameter $\\alpha$",
1170
+ "text_level": 1,
1171
+ "bbox": [
1172
+ 507,
1173
+ 274,
1174
+ 806,
1175
+ 290
1176
+ ],
1177
+ "page_idx": 6
1178
+ },
1179
+ {
1180
+ "type": "text",
1181
+ "text": "We further investigate the influence of the hyperparameter $\\alpha$ (in Equation 5), which is used to balance the posterior and prior confidence (Figure 3). The prior confidence demonstrates the labeled confidence given by the annotators, which is biased due to the selection of annotators. To reduce this bias, we enhance our model to estimate the posterior confidence that is learned by the model.",
1182
+ "bbox": [
1183
+ 505,
1184
+ 296,
1185
+ 884,
1186
+ 423
1187
+ ],
1188
+ "page_idx": 6
1189
+ },
1190
+ {
1191
+ "type": "text",
1192
+ "text": "From the figures, we can make the following observations. First, when the noise is high, the smaller the $\\alpha$, the better the performance. Intuitively, the confidence given by the annotators is not reliable when the perturbation rate $r$ is large. Second, when the noise is low, the trend that a larger $\\alpha$ yields better performance is less obvious. The reason is that the model can easily disambiguate the ground truth from the candidates since the data is clean. Most of the labels are correct, so the confidence matters less in this case. All the findings indicate that our confidence estimator can make use of the prior confidence and learn the posterior confidence effectively.",
1193
+ "bbox": [
1194
+ 507,
1195
+ 425,
1196
+ 884,
1197
+ 650
1198
+ ],
1199
+ "page_idx": 6
1200
+ },
1201
+ {
1202
+ "type": "text",
1203
+ "text": "5 Related Work",
1204
+ "text_level": 1,
1205
+ "bbox": [
1206
+ 507,
1207
+ 664,
1208
+ 665,
1209
+ 678
1210
+ ],
1211
+ "page_idx": 6
1212
+ },
1213
+ {
1214
+ "type": "text",
1215
+ "text": "In this section, we mainly review the most related works about named entity recognition (Section 5.1) and partial label learning (Section 5.2).",
1216
+ "bbox": [
1217
+ 507,
1218
+ 690,
1219
+ 882,
1220
+ 739
1221
+ ],
1222
+ "page_idx": 6
1223
+ },
1224
+ {
1225
+ "type": "text",
1226
+ "text": "5.1 Named Entity Recognition",
1227
+ "text_level": 1,
1228
+ "bbox": [
1229
+ 507,
1230
+ 752,
1231
+ 764,
1232
+ 768
1233
+ ],
1234
+ "page_idx": 6
1235
+ },
1236
+ {
1237
+ "type": "text",
1238
+ "text": "Named Entity Recognition (NER) is a research hotspot since it can be applied to many downstream Natural Language Processing (NLP) tasks. A well-trained NER model takes a language sequence as input and marks out all the entities in the sequence with the correct entity type. NER is widely treated as a sequence labeling problem, a token-level tagging task (Chiu and Nichols, 2015; Akbik et al., 2018; Yan et al., 2019). Also, some of the re",
1239
+ "bbox": [
1240
+ 507,
1241
+ 774,
1242
+ 884,
1243
+ 917
1244
+ ],
1245
+ "page_idx": 6
1246
+ },
1247
+ {
1248
+ "type": "page_number",
1249
+ "text": "1381",
1250
+ "bbox": [
1251
+ 482,
1252
+ 928,
1253
+ 517,
1254
+ 940
1255
+ ],
1256
+ "page_idx": 6
1257
+ },
1258
+ {
1259
+ "type": "image",
1260
+ "img_path": "images/e44a579cf6cd3455f8f95db33b3339fb598503fde02cc99d42278c77454d10ff.jpg",
1261
+ "image_caption": [
1262
+ "(a) Weibo"
1263
+ ],
1264
+ "image_footnote": [],
1265
+ "bbox": [
1266
+ 136,
1267
+ 80,
1268
+ 378,
1269
+ 219
1270
+ ],
1271
+ "page_idx": 7
1272
+ },
1273
+ {
1274
+ "type": "image",
1275
+ "img_path": "images/e01e4b9e6925cd3f162f34a5c3c8e50e90c9cb4766a3e8a71df886139f34937f.jpg",
1276
+ "image_caption": [
1277
+ "(b)Resume",
1278
+ "Figure 3: The influence of hyper-parameter $\\alpha$ , which is leveraged to control the balance between the posterior and prior confidence."
1279
+ ],
1280
+ "image_footnote": [],
1281
+ "bbox": [
1282
+ 379,
1283
+ 80,
1284
+ 620,
1285
+ 219
1286
+ ],
1287
+ "page_idx": 7
1288
+ },
1289
+ {
1290
+ "type": "image",
1291
+ "img_path": "images/2d3b22879db84e2f0704ab24b9b15e8629e94e2073ca0864cab2a90a2fcf9d59.jpg",
1292
+ "image_caption": [
1293
+ "(c) Ontonotes"
1294
+ ],
1295
+ "image_footnote": [],
1296
+ "bbox": [
1297
+ 621,
1298
+ 80,
1299
+ 863,
1300
+ 219
1301
+ ],
1302
+ "page_idx": 7
1303
+ },
1304
+ {
1305
+ "type": "text",
1306
+ "text": "searchers regard NER as a span-level classification task (Xue et al., 2020; Fu et al., 2021; Alemi et al., 2023). In these works, NER is a fully-supervised learning task based on large-scale labeled data, where each token is asserted with a golden label.",
1307
+ "bbox": [
1308
+ 112,
1309
+ 312,
1310
+ 487,
1311
+ 391
1312
+ ],
1313
+ "page_idx": 7
1314
+ },
1315
+ {
1316
+ "type": "text",
1317
+ "text": "Crowdsourcing platforms (e.g., Amazon Mechanical Turk) are a popular way to obtain large amounts of labeled data. Due to the large label space and the complexity of NER, the quality of the labeled data is low. The ground truth obtained by simple majority voting contains a lot of noise, which largely limits the performance of the model. Some literature trains the model directly from multiple annotators (Simpson and Gurevych, 2019; Nguyen et al., 2017), mainly focusing on modeling the differences among annotators to find a trustworthy annotator. In fact, a sentence may not be labeled completely correctly by any single annotator, while each annotator may still label some of the entities correctly. To address this problem, we translate this task into a partial label learning problem with a prior confidence score.",
1318
+ "bbox": [
1319
+ 115,
1320
+ 394,
1321
+ 489,
1322
+ 651
1323
+ ],
1324
+ "page_idx": 7
1325
+ },
1326
+ {
1327
+ "type": "text",
1328
+ "text": "5.2 Partial Label Learning",
1329
+ "text_level": 1,
1330
+ "bbox": [
1331
+ 112,
1332
+ 669,
1333
+ 341,
1334
+ 686
1335
+ ],
1336
+ "page_idx": 7
1337
+ },
1338
+ {
1339
+ "type": "text",
1340
+ "text": "Unlike fully-supervised learning, which uses data with a golden label $\\mathbf{y}$, Partial Label Learning (PLL) assigns a candidate set $\\mathcal{V}$ to each input $\\mathbf{x}$ (Zhang et al., 2016; Wang et al., 2023; Lv et al., 2020). Although we cannot ensure that the golden label $\\mathbf{y}$ is always in the candidate set $\\mathcal{V}$, most PLL researchers assume for simplicity that one of the candidate labels is the golden label. The existing studies on PLL can be categorized into two groups, average-based methods (Zhang and Yu, 2015) and identification-based methods (Jin and Ghahramani, 2002; Lyu et al., 2019). Average-based methods (Zhang and Yu, 2015; Hüllermeier and Beringer, 2006) intuitively treat the candidate labels with",
1341
+ "bbox": [
1342
+ 112,
1343
+ 694,
1344
+ 489,
1345
+ 917
1346
+ ],
1347
+ "page_idx": 7
1348
+ },
1349
+ {
1350
+ "type": "text",
1351
+ "text": "equal importance. The main weakness of these algorithms is that false positives may severely distract the model with wrong label information. Recently, identification-based methods (Jin and Ghahramani, 2002; Wang et al., 2023) have been proposed to identify the true label from the candidates by regarding the ground truth as a latent variable. More and more literature pays attention to representative methods (Lyu et al., 2019; Nguyen and Caruana, 2008), self-training methods (Wen et al., 2021), and loss function adjustments (Wu and Zhang, 2018).",
1352
+ "bbox": [
1353
+ 507,
1354
+ 312,
1355
+ 884,
1356
+ 488
1357
+ ],
1358
+ "page_idx": 7
1359
+ },
1360
+ {
1361
+ "type": "text",
1362
+ "text": "However, most of the current work focuses on image classification or text classification tasks, while how to model the confidence for NER is not well studied. The sequence labeling task aims to identify the entities in the sentence with an entity type at the token level. Thus, how to model the token itself and its content also plays an important role in this task. To address this problem, we design a confidence estimator to predict the token- and content-dependent confidence based on the prior confidence given by the annotators.",
1363
+ "bbox": [
1364
+ 507,
1365
+ 489,
1366
+ 882,
1367
+ 667
1368
+ ],
1369
+ "page_idx": 7
1370
+ },
1371
+ {
1372
+ "type": "text",
1373
+ "text": "6 Conclusion and Future Work",
1374
+ "text_level": 1,
1375
+ "bbox": [
1376
+ 507,
1377
+ 681,
1378
+ 794,
1379
+ 697
1380
+ ],
1381
+ "page_idx": 7
1382
+ },
1383
+ {
1384
+ "type": "text",
1385
+ "text": "In this paper, we translate crowd-annotated NER into a PLL problem and propose a CPLL model based on an EM algorithm. To rectify the model's prediction, we design a confidence estimator to predict token- and content-dependent confidence by incorporating prior confidence with posterior confidence. We conduct experiments on one real-world dataset and four synthetic datasets to evaluate the performance of our proposed CPLL model by comparing it with several state-of-the-art baselines. Moreover, we conduct ablation studies to verify the effectiveness of the key components and explore the influence of annotation inconsistency.",
1386
+ "bbox": [
1387
+ 505,
1388
+ 709,
1389
+ 882,
1390
+ 917
1391
+ ],
1392
+ "page_idx": 7
1393
+ },
1394
+ {
1395
+ "type": "page_number",
1396
+ "text": "1382",
1397
+ "bbox": [
1398
+ 482,
1399
+ 927,
1400
+ 519,
1401
+ 940
1402
+ ],
1403
+ "page_idx": 7
1404
+ },
1405
+ {
1406
+ "type": "text",
1407
+ "text": "In the future, we would like to investigate the performance of our model on other sequence labeling tasks.",
1408
+ "bbox": [
1409
+ 112,
1410
+ 84,
1411
+ 489,
1412
+ 131
1413
+ ],
1414
+ "page_idx": 8
1415
+ },
1416
+ {
1417
+ "type": "text",
1418
+ "text": "Limitations",
1419
+ "text_level": 1,
1420
+ "bbox": [
1421
+ 114,
1422
+ 148,
1423
+ 220,
1424
+ 164
1425
+ ],
1426
+ "page_idx": 8
1427
+ },
1428
+ {
1429
+ "type": "text",
1430
+ "text": "Although our work shows that our CPLL model can learn from crowd-annotated NER data well, there are at least two limitations. First, we set the hyperparameter $\\alpha$ manually. It would be better if we could design a strategy to learn an adaptive $\\alpha$ value for each sample automatically. Second, although we mainly experiment on NER tasks, our model can be applied to all sequence labeling tasks, such as part-of-speech (POS) tagging, Chinese word segmentation, and so on. We would like to explore this in future work.",
1431
+ "bbox": [
1432
+ 112,
1433
+ 177,
1434
+ 490,
1435
+ 353
1436
+ ],
1437
+ "page_idx": 8
1438
+ },
1439
+ {
1440
+ "type": "text",
1441
+ "text": "Acknowledgements",
1442
+ "text_level": 1,
1443
+ "bbox": [
1444
+ 114,
1445
+ 370,
1446
+ 285,
1447
+ 386
1448
+ ],
1449
+ "page_idx": 8
1450
+ },
1451
+ {
1452
+ "type": "text",
1453
+ "text": "The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (No.62206057), Shanghai Rising-Star Program (23QA1400200), Natural Science Foundation of Shanghai (23ZR1403500), and CCF-Tencent Open Fund.",
1454
+ "bbox": [
1455
+ 112,
1456
+ 399,
1457
+ 489,
1458
+ 511
1459
+ ],
1460
+ "page_idx": 8
1461
+ },
1462
+ {
1463
+ "type": "text",
1464
+ "text": "References",
1465
+ "text_level": 1,
1466
+ "bbox": [
1467
+ 114,
1468
+ 542,
1469
+ 213,
1470
+ 557
1471
+ ],
1472
+ "page_idx": 8
1473
+ },
1474
+ {
1475
+ "type": "list",
1476
+ "sub_type": "ref_text",
1477
+ "list_items": [
1478
+ "Alan Akbik, Duncan A. J. Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. international conference on computational linguistics.",
1479
+ "Alexander Alemi, Ian Fischer, Joshua Dillon, Jacob Devlin, Ming-Wei Chang, Kenton Lee, Marco Federici, Anjan Dutta, Patrick Forre, Nate Kush, Robert Geirhos, Jorn-Henrik Jacobsen, Richard Michaelis, and Wieland Zemel. 2023. Miner: Improving out-of-vocabulary named entity recognition from an information theoretic perspective. meeting of the association for computational linguistics.",
1480
+ "Nguyen Bach and Sameer Badaskar. 2007. A review of relation extraction. Literature review for Language and Statistics II, 2:1-15.",
1481
+ "Jason P.C. Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional lstm-cnns. Transactions of the Association for Computational Linguistics.",
1482
+ "Timothy Cour, Ben Sapp, and Ben Taskar. 2011. Learning from partial labels. The Journal of Machine Learning Research, 12:1501-1536."
1483
+ ],
1484
+ "bbox": [
1485
+ 115,
1486
+ 567,
1487
+ 489,
1488
+ 917
1489
+ ],
1490
+ "page_idx": 8
1491
+ },
1492
+ {
1493
+ "type": "list",
1494
+ "sub_type": "ref_text",
1495
+ "list_items": [
1496
+ "Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pretraining with whole word masking for chinese bert. arXiv: Computation and Language.",
1497
+ "Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1-22.",
1498
+ "Lei Feng and Bo An. 2019. Partial label learning with self-guided retraining. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 3542-3549.",
1499
+ "Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, and Masashi Sugiyama. 2020. Provably consistent partial-label learning. Advances in Neural Information Processing Systems, 33:10948-10960.",
1500
+ "Jinlan Fu, Xuanjing Huang, and Pengfei Liu. 2021. Spanner: Named entity re-/recognition as span prediction. meeting of the association for computational linguistics.",
1501
+ "Eyke Hüllermeier and Jürgen Beringer. 2006. Learning from ambiguously labeled examples. Intelligent Data Analysis, 10(5):419-439.",
1502
+ "Rong Jin and Zoubin Ghahramani. 2002. Learning with multiple labels. Advances in neural information processing systems, 15.",
1503
+ "Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171-4186.",
1504
+ "Youngdong Kim, Junho Yim, Juseung Yun, and Junmo Kim. 2019. NLNL: Negative learning for noisy labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 101-110.",
1505
+ "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
1506
+ "Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108-117.",
1507
+ "Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, and Masashi Sugiyama. 2020. Progressive identification of true labels for partial-label learning. In International Conference on Machine Learning, pages 6500-6510. PMLR.",
1508
+ "Gengyu Lyu, Songhe Feng, Tao Wang, Congyan Lang, and Yidong Li. 2019. Gm-pll: graph matching based partial label learning. IEEE Transactions on Knowledge and Data Engineering, 33(2):521-535."
1509
+ ],
1510
+ "bbox": [
1511
+ 510,
1512
+ 85,
1513
+ 884,
1514
+ 917
1515
+ ],
1516
+ "page_idx": 8
1517
+ },
1518
+ {
1519
+ "type": "page_number",
1520
+ "text": "1383",
1521
+ "bbox": [
1522
+ 482,
1523
+ 928,
1524
+ 519,
1525
+ 940
1526
+ ],
1527
+ "page_idx": 8
1528
+ },
1529
+ {
1530
+ "type": "list",
1531
+ "sub_type": "ref_text",
1532
+ "list_items": [
1533
+ "An T Nguyen, Byron C Wallace, Junyi Jessy Li, An Nenkova, and Matthew Lease. 2017. Aggregating and predicting sequence labels from crowd annotations. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 2017, page 299. NIH Public Access.",
1534
+ "Nam Nguyen and Rich Caruana. 2008. Classification with partial labels. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 551-559.",
1535
+ "Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. neural information processing systems.",
1536
+ "Nanyun Peng and Mark Dredze. 2015. Named entity recognition for chinese social media with jointly trained embeddings. In Proceedings of the 2015 conference on empirical methods in natural language processing, pages 548-554.",
1537
+ "Filipe Rodrigues, Francisco Pereira, and Bernardete Ribeiro. 2014. Sequence labeling with multiple annotators. Machine learning, 95(2):165-181.",
1538
+ "Edwin Simpson and Iryna Gurevych. 2019. A Bayesian approach for sequence tagging with crowds. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1093-1104, Hong Kong, China. Association for Computational Linguistics.",
1539
+ "David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789.",
1540
+ "Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, and Junbo Zhao. 2023. Pico: Contrastive label disambiguation for partial label learning. Learning.",
1541
+ "Ralph Weischedel, Sameer Pradhan, Lance Ramshaw, Martha Palmer, Nianwen Xue, Mitchell Marcus, Ann Taylor, Craig Greenberg, Eduard Hovy, Robert Belvin, et al. 2011. Ontonotes release 4.0. LDC2011T03, Philadelphia, Penn.: Linguistic Data Consortium.",
1542
+ "Hongwei Wen, Jingyi Cui, Hanyuan Hang, Jiabin Liu, Yisen Wang, and Zhouchen Lin. 2021. Leveraged weighted loss for partial label learning. In International Conference on Machine Learning, pages 11091-11100. PMLR."
1543
+ ],
1544
+ "bbox": [
1545
+ 115,
1546
+ 85,
1547
+ 485,
1548
+ 916
1549
+ ],
1550
+ "page_idx": 9
1551
+ },
1552
+ {
1553
+ "type": "list",
1554
+ "sub_type": "ref_text",
1555
+ "list_items": [
1556
+ "Xuan Wu and Min-Ling Zhang. 2018. Towards enabling binary decomposition for partial label learning. In *IJCAI*, pages 2868-2874.",
1557
+ "Mengge Xue, Bowen Yu, Zhenyu Zhang, Tingwen Liu, Yue Zhang, and Bin Wang. 2020. Coarse-to-fine pre-training for named entity recognition. empirical methods in natural language processing.",
1558
+ "Vikas Yadav and Steven Bethard. 2018. A survey on recent advances in named entity recognition from deep learning models. In 27th International Conference on Computational Linguistics, COLING 2018, pages 2145-2158. Association for Computational Linguistics (ACL).",
1559
+ "Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu. 2019. Tener: Adapting transformer encoder for named entity recognition. arXiv: Computation and Language.",
1560
+ "Yan Yan and Yuhong Guo. 2020. Partial label learning with batch label correction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6575-6582.",
1561
+ "YaoSheng Yang, Meishan Zhang, Wenliang Chen, Wei Zhang, Haofen Wang, and Min Zhang. 2018. Adversarial learning for chinese ner from crowd annotations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.",
1562
+ "Min-Ling Zhang and Fei Yu. 2015. Solving the partial label learning problem: An instance-based approach. In Twenty-fourth international joint conference on artificial intelligence.",
1563
+ "Min-Ling Zhang, Bin-Bin Zhou, and Xu-Ying Liu. 2016. Partial label learning via feature-aware disambiguation. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1335-1344.",
1564
+ "Yue Zhang and Jie Yang. 2018. Chinese ner using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554-1564.",
1565
+ "Jie Zhou, Qi Zhang, Qin Chen, Liang He, and Xuan-Jing Huang. 2022. A multi-format transfer learning model for event argument extraction via variational information bottleneck. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1990-2000."
1566
+ ],
1567
+ "bbox": [
1568
+ 510,
1569
+ 85,
1570
+ 880,
1571
+ 766
1572
+ ],
1573
+ "page_idx": 9
1574
+ },
1575
+ {
1576
+ "type": "page_number",
1577
+ "text": "1384",
1578
+ "bbox": [
1579
+ 482,
1580
+ 928,
1581
+ 519,
1582
+ 940
1583
+ ],
1584
+ "page_idx": 9
1585
+ },
1586
+ {
1587
+ "type": "text",
1588
+ "text": "A For every submission:",
1589
+ "bbox": [
1590
+ 114,
1591
+ 107,
1592
+ 322,
1593
+ 122
1594
+ ],
1595
+ "page_idx": 10
1596
+ },
1597
+ {
1598
+ "type": "text",
1599
+ "text": "A1. Did you describe the limitations of your work?",
1600
+ "bbox": [
1601
+ 129,
1602
+ 126,
1603
+ 532,
1604
+ 143
1605
+ ],
1606
+ "page_idx": 10
1607
+ },
1608
+ {
1609
+ "type": "text",
1610
+ "text": "Section Limitations",
1611
+ "bbox": [
1612
+ 149,
1613
+ 143,
1614
+ 297,
1615
+ 156
1616
+ ],
1617
+ "page_idx": 10
1618
+ },
1619
+ {
1620
+ "type": "text",
1621
+ "text": "A2. Did you discuss any potential risks of your work?",
1622
+ "bbox": [
1623
+ 129,
1624
+ 170,
1625
+ 552,
1626
+ 186
1627
+ ],
1628
+ "page_idx": 10
1629
+ },
1630
+ {
1631
+ "type": "text",
1632
+ "text": "Our work does not have any potential risks.",
1633
+ "bbox": [
1634
+ 149,
1635
+ 187,
1636
+ 473,
1637
+ 200
1638
+ ],
1639
+ "page_idx": 10
1640
+ },
1641
+ {
1642
+ "type": "text",
1643
+ "text": "A3. Do the abstract and introduction summarize the paper's main claims?",
1644
+ "bbox": [
1645
+ 129,
1646
+ 212,
1647
+ 695,
1648
+ 228
1649
+ ],
1650
+ "page_idx": 10
1651
+ },
1652
+ {
1653
+ "type": "text",
1654
+ "text": "Abstract and Section 1. Introduction",
1655
+ "bbox": [
1656
+ 149,
1657
+ 230,
1658
+ 421,
1659
+ 243
1660
+ ],
1661
+ "page_idx": 10
1662
+ },
1663
+ {
1664
+ "type": "text",
1665
+ "text": "A4. Have you used AI writing assistants when working on this paper?",
1666
+ "bbox": [
1667
+ 129,
1668
+ 255,
1669
+ 668,
1670
+ 272
1671
+ ],
1672
+ "page_idx": 10
1673
+ },
1674
+ {
1675
+ "type": "text",
1676
+ "text": "Left blank.",
1677
+ "bbox": [
1678
+ 149,
1679
+ 273,
1680
+ 231,
1681
+ 287
1682
+ ],
1683
+ "page_idx": 10
1684
+ },
1685
+ {
1686
+ "type": "text",
1687
+ "text": "B Did you use or create scientific artifacts?",
1688
+ "bbox": [
1689
+ 114,
1690
+ 300,
1691
+ 489,
1692
+ 316
1693
+ ],
1694
+ "page_idx": 10
1695
+ },
1696
+ {
1697
+ "type": "text",
1698
+ "text": "Left blank.",
1699
+ "bbox": [
1700
+ 132,
1701
+ 321,
1702
+ 213,
1703
+ 336
1704
+ ],
1705
+ "page_idx": 10
1706
+ },
1707
+ {
1708
+ "type": "text",
1709
+ "text": "B1. Did you cite the creators of artifacts you used?",
1710
+ "bbox": [
1711
+ 127,
1712
+ 347,
1713
+ 529,
1714
+ 363
1715
+ ],
1716
+ "page_idx": 10
1717
+ },
1718
+ {
1719
+ "type": "text",
1720
+ "text": "No response.",
1721
+ "bbox": [
1722
+ 149,
1723
+ 363,
1724
+ 248,
1725
+ 379
1726
+ ],
1727
+ "page_idx": 10
1728
+ },
1729
+ {
1730
+ "type": "text",
1731
+ "text": "B2. Did you discuss the license or terms for use and / or distribution of any artifacts?",
1732
+ "bbox": [
1733
+ 127,
1734
+ 390,
1735
+ 778,
1736
+ 406
1737
+ ],
1738
+ "page_idx": 10
1739
+ },
1740
+ {
1741
+ "type": "text",
1742
+ "text": "No response.",
1743
+ "bbox": [
1744
+ 149,
1745
+ 407,
1746
+ 248,
1747
+ 422
1748
+ ],
1749
+ "page_idx": 10
1750
+ },
1751
+ {
1752
+ "type": "text",
1753
+ "text": "B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?",
1754
+ "bbox": [
1755
+ 127,
1756
+ 432,
1757
+ 880,
1758
+ 495
1759
+ ],
1760
+ "page_idx": 10
1761
+ },
1762
+ {
1763
+ "type": "text",
1764
+ "text": "No response.",
1765
+ "bbox": [
1766
+ 149,
1767
+ 498,
1768
+ 248,
1769
+ 513
1770
+ ],
1771
+ "page_idx": 10
1772
+ },
1773
+ {
1774
+ "type": "text",
1775
+ "text": "B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?",
1776
+ "bbox": [
1777
+ 127,
1778
+ 524,
1779
+ 880,
1780
+ 571
1781
+ ],
1782
+ "page_idx": 10
1783
+ },
1784
+ {
1785
+ "type": "text",
1786
+ "text": "No response.",
1787
+ "bbox": [
1788
+ 149,
1789
+ 574,
1790
+ 248,
1791
+ 588
1792
+ ],
1793
+ "page_idx": 10
1794
+ },
1795
+ {
1796
+ "type": "text",
1797
+ "text": "B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?",
1798
+ "bbox": [
1799
+ 127,
1800
+ 599,
1801
+ 880,
1802
+ 631
1803
+ ],
1804
+ "page_idx": 10
1805
+ },
1806
+ {
1807
+ "type": "text",
1808
+ "text": "No response.",
1809
+ "bbox": [
1810
+ 149,
1811
+ 633,
1812
+ 248,
1813
+ 646
1814
+ ],
1815
+ "page_idx": 10
1816
+ },
1817
+ {
1818
+ "type": "text",
1819
+ "text": "B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.",
1820
+ "bbox": [
1821
+ 127,
1822
+ 658,
1823
+ 880,
1824
+ 739
1825
+ ],
1826
+ "page_idx": 10
1827
+ },
1828
+ {
1829
+ "type": "text",
1830
+ "text": "No response.",
1831
+ "bbox": [
1832
+ 149,
1833
+ 740,
1834
+ 248,
1835
+ 753
1836
+ ],
1837
+ "page_idx": 10
1838
+ },
1839
+ {
1840
+ "type": "text",
1841
+ "text": "C Did you run computational experiments?",
1842
+ "bbox": [
1843
+ 114,
1844
+ 764,
1845
+ 492,
1846
+ 781
1847
+ ],
1848
+ "page_idx": 10
1849
+ },
1850
+ {
1851
+ "type": "text",
1852
+ "text": "3.3 Implementation Details and Metrics",
1853
+ "bbox": [
1854
+ 132,
1855
+ 787,
1856
+ 430,
1857
+ 801
1858
+ ],
1859
+ "page_idx": 10
1860
+ },
1861
+ {
1862
+ "type": "text",
1863
+ "text": "C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?",
1864
+ "bbox": [
1865
+ 129,
1866
+ 812,
1867
+ 880,
1868
+ 845
1869
+ ],
1870
+ "page_idx": 10
1871
+ },
1872
+ {
1873
+ "type": "text",
1874
+ "text": "3.3 Implementation Details and Metrics",
1875
+ "bbox": [
1876
+ 149,
1877
+ 846,
1878
+ 447,
1879
+ 860
1880
+ ],
1881
+ "page_idx": 10
1882
+ },
1883
+ {
1884
+ "type": "text",
1885
+ "text": "The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.",
1886
+ "bbox": [
1887
+ 112,
1888
+ 868,
1889
+ 877,
1890
+ 892
1891
+ ],
1892
+ "page_idx": 10
1893
+ },
1894
+ {
1895
+ "type": "header",
1896
+ "text": "ACL 2023 Responsible NLP Checklist",
1897
+ "bbox": [
1898
+ 132,
1899
+ 84,
1900
+ 433,
1901
+ 99
1902
+ ],
1903
+ "page_idx": 10
1904
+ },
1905
+ {
1906
+ "type": "page_number",
1907
+ "text": "1385",
1908
+ "bbox": [
1909
+ 482,
1910
+ 928,
1911
+ 519,
1912
+ 940
1913
+ ],
1914
+ "page_idx": 10
1915
+ },
1916
+ {
1917
+ "type": "text",
1918
+ "text": "C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?",
1919
+ "bbox": [
1920
+ 129,
1921
+ 83,
1922
+ 878,
1923
+ 115
1924
+ ],
1925
+ "page_idx": 11
1926
+ },
1927
+ {
1928
+ "type": "text",
1929
+ "text": "3.3 Implementation Details and Metrics",
1930
+ "bbox": [
1931
+ 149,
1932
+ 117,
1933
+ 447,
1934
+ 131
1935
+ ],
1936
+ "page_idx": 11
1937
+ },
1938
+ {
1939
+ "type": "text",
1940
+ "text": "C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?",
1941
+ "bbox": [
1942
+ 127,
1943
+ 143,
1944
+ 878,
1945
+ 190
1946
+ ],
1947
+ "page_idx": 11
1948
+ },
1949
+ {
1950
+ "type": "text",
1951
+ "text": "We run our model using the same seed and select the best model based on the development set.",
1952
+ "bbox": [
1953
+ 149,
1954
+ 192,
1955
+ 796,
1956
+ 206
1957
+ ],
1958
+ "page_idx": 11
1959
+ },
1960
+ {
1961
+ "type": "text",
1962
+ "text": "C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?",
1963
+ "bbox": [
1964
+ 127,
1965
+ 218,
1966
+ 878,
1967
+ 265
1968
+ ],
1969
+ "page_idx": 11
1970
+ },
1971
+ {
1972
+ "type": "text",
1973
+ "text": "Not applicable. Left blank.",
1974
+ "bbox": [
1975
+ 149,
1976
+ 267,
1977
+ 349,
1978
+ 282
1979
+ ],
1980
+ "page_idx": 11
1981
+ },
1982
+ {
1983
+ "type": "text",
1984
+ "text": "D Did you use human annotators (e.g., crowdworkers) or research with human participants?",
1985
+ "bbox": [
1986
+ 114,
1987
+ 292,
1988
+ 875,
1989
+ 309
1990
+ ],
1991
+ "page_idx": 11
1992
+ },
1993
+ {
1994
+ "type": "text",
1995
+ "text": "3.1 Datasets",
1996
+ "bbox": [
1997
+ 132,
1998
+ 313,
1999
+ 228,
2000
+ 328
2001
+ ],
2002
+ "page_idx": 11
2003
+ },
2004
+ {
2005
+ "type": "list",
2006
+ "sub_type": "text",
2007
+ "list_items": [
2008
+ "D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?",
2009
+ "3.1 Datasets",
2010
+ "D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?",
2011
+ "3.1 Datasets",
2012
+ "D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?",
2013
+ "3.1 Datasets",
2014
+ "D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank.",
2015
+ "D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?"
2016
+ ],
2017
+ "bbox": [
2018
+ 127,
2019
+ 338,
2020
+ 878,
2021
+ 623
2022
+ ],
2023
+ "page_idx": 11
2024
+ },
2025
+ {
2026
+ "type": "text",
2027
+ "text": "Not applicable. Left blank.",
2028
+ "bbox": [
2029
+ 149,
2030
+ 626,
2031
+ 349,
2032
+ 640
2033
+ ],
2034
+ "page_idx": 11
2035
+ },
2036
+ {
2037
+ "type": "page_number",
2038
+ "text": "1386",
2039
+ "bbox": [
2040
+ 482,
2041
+ 928,
2042
+ 519,
2043
+ 940
2044
+ ],
2045
+ "page_idx": 11
2046
+ }
2047
+ ]
2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/1bad24cc-7495-4411-88f1-9e429d01bb74_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/1bad24cc-7495-4411-88f1-9e429d01bb74_origin.pdf ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f80d71379e1624b28ad870ab72b486702370047b146fb7f082c8faa1d3244e17
3
+ size 748890
2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/full.md ADDED
@@ -0,0 +1,384 @@
1
+ # A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition
2
+
3
+ Limao Xiong $^{1}$ , Jie Zhou $^{1*}$ , Qunxi Zhu $^{2}$ , Xiao Wang $^{1}$ , Yuanbin Wu $^{3}$ , Qi Zhang $^{1}$ , Tao Gui $^{4}$ , Xuanjing Huang $^{1}$ , Jin Ma $^{5}$ , Ying Shan $^{5}$
4
+
5
+ $^{1}$ School of Computer Science, Fudan University
6
+
7
+ $^{2}$ Research Institute of Intelligent Complex Systems, Fudan University
8
+
9
+ $^{3}$ The Department of Computer Science and Technology, East China Normal University
10
+
11
+ $^{4}$ Institute of Modern Languages and Linguistics, Fudan University
12
+
13
+ $^{5}$ Applied Research Center (ARC), Tencent PCG
14
+
15
+ # Abstract
16
+
17
+ Existing models for named entity recognition (NER) are mainly based on large-scale labeled datasets, which are usually obtained via crowdsourcing. However, it is hard to obtain a unified and correct label via majority voting from multiple annotators for NER due to the large labeling space and complexity of this task. To address this problem, we aim to utilize the original multi-annotator labels directly. In particular, we propose a Confidence-based Partial Label Learning (CPLL) method to integrate the prior confidence (given by annotators) and posterior confidence (learned by models) for crowd-annotated NER. This model learns a token- and content-dependent confidence via an Expectation-Maximization (EM) algorithm by minimizing empirical risk. The true posterior estimator and confidence estimator run iteratively to update the true posterior and confidence, respectively. We conduct extensive experiments on both real-world and synthetic datasets, which show that our model can improve performance effectively compared with strong baselines.
18
+
19
+ # 1 Introduction
20
+
21
+ Named entity recognition (NER) plays a fundamental role in many downstream natural language processing (NLP) tasks, such as relation extraction (Bach and Badaskar, 2007) and event extraction (Wadden et al., 2019; Zhou et al., 2022). Recently, by leveraging deep learning models, existing NER systems have achieved superior performance on NER datasets. However, these models typically require a massive amount of labeled training data, such as MSRA (Levow, 2006), Ontonotes 4.0 (Weischedel et al., 2011), and Resume (Zhang and Yang, 2018). In real applications, we often need to consider new types of entities in new domains where we do not have existing annotated data. The most common way to label the data at a lower cost
22
+
23
+ is crowdsourcing (Peng and Dredze, 2015), which labels the data using multiple annotators.
24
+
25
+ The crowd-annotated datasets are always of low quality for the following two reasons. First, in exchange for the lower cost, crowd annotators are usually non-experts. Various annotators may have different interpretations of the labeling guidelines. Moreover, they may make mistakes in the labeling process. It is hard to get a number of annotators to reach an agreement. For example, annotator 1 labels "David and Jack" as a PER entity, while the correct labels are "David" and "Jack" under our guidelines (Table 1). Also, we should label continuous time and place expressions as one entity (e.g., "tomorrow at 10:00 a.m." and "company (room 1003)"). Second, due to ambiguous word boundaries and complex composition, the NER task is more challenging than text classification tasks. Annotator 3 ignores the token "a.m." for the time entity and wrongly adds "the" as part of the place entity. Also, he/she misses the person entities in the text. In this paper, we focus on building a powerful NER system based on crowd-annotated data, which is of low quality.
26
+
27
+ There are two main ways to utilize crowd-annotated data. One simple and important way to obtain high-quality annotations for each input instance is majority voting. As shown in Table 1, the majority voting method cannot recover the correct answers from these three annotations. The right labels (e.g., "David", "Jack", "tomorrow at 10:00 a.m.", and "company (room 1003)") are each annotated only once, by annotator 1 or 2. Another line of work models the differences among annotators by finding the trustworthy annotators (Rodrigues et al., 2014; Nguyen et al., 2017; Yang et al., 2018). From Table 1, we can see that none of the three annotators labels the entities completely correctly. Thus, both kinds of methods waste human labor.
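The failure mode of majority voting described above can be illustrated with a minimal sketch (hypothetical spans and votes mirroring Table 1, not the paper's actual data): when each correct span is proposed by only one of three annotators, no span reaches a majority.

```python
from collections import Counter

def majority_vote_spans(annotations, n_annotators, threshold=0.5):
    """Keep only the spans proposed by a strict majority of annotators."""
    votes = Counter(span for ann in annotations for span in ann)
    return {span for span, c in votes.items() if c / n_annotators > threshold}

# Each annotation is a set of (span text, entity type) pairs.
ann1 = {("David", "PER"), ("Jack", "PER")}   # correct segmentation
ann2 = {("David and Jack", "PER")}           # merged into one span
ann3 = set()                                 # person entities missed
print(majority_vote_spans([ann1, ann2, ann3], n_annotators=3))  # -> set()
```

Every span receives exactly one vote out of three, so the strict-majority filter discards all of them, including the correct ones.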
28
+
29
+ To address this problem, we translate this task into a partial label learning (PLL) problem, which
30
+
31
+ ![](images/d4264851e0b04a6c2cea9b52f9406d6003b42f7c76a6b9cc23a6f0fa8cae9a2a.jpg)
32
+ Table 1: The spans marked with blue, green, and red are time (TIME), person (PER), and place (PLACE) entities labeled by three annotators.
33
+
34
+ trains the model on a dataset where each sample is assigned a set of candidate labels (Cour et al., 2011; Wen et al., 2021). Thus, it is natural to utilize all human labor via PLL, which can be divided into two types: 1) average-based methods, which consider each candidate class equally (Hüllermeier and Beringer, 2006; Zhang and Yu, 2015); 2) identification-based methods, which predict the ground-truth label as a latent variable via a translation matrix that describes the score of each candidate label (Feng and An, 2019; Yan and Guo, 2020; Feng et al., 2020). Despite extensive studies on PLL methods, there are still two challenges in our setting. One challenge (C1) is that these methods break down when the same candidate label occurs more than once. General PLL assumes that each candidate label is assigned only once, while in our situation each sample may be assigned the same class multiple times by different annotators. Another challenge (C2) is that most existing studies on PLL focus on image or text classification tasks, while we focus on a more complex task, sequence labeling, where each token is assigned a label. Thus, the token itself and its content should be considered in this task.
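As a concrete sketch of the average-based family described above (a generic illustration of the idea, not any cited method's exact loss), the candidate labels can simply share the target probability mass uniformly:

```python
import math

def average_based_pll_loss(logits, candidate_idx):
    """Cross-entropy against a uniform target over the candidate label set."""
    log_z = math.log(sum(math.exp(l) for l in logits))  # log-partition
    w = 1.0 / len(candidate_idx)                        # every candidate weighted equally
    return -sum(w * (logits[i] - log_z) for i in candidate_idx)

# 4 classes; annotators proposed classes 0 and 2 as candidates.
loss = average_based_pll_loss([2.0, 0.1, 1.5, -0.3], candidate_idx=[0, 2])
print(round(loss, 4))
```

Because every candidate is weighted equally, a false-positive candidate pulls the model toward the wrong class just as strongly as the true one, which is exactly the weakness identification-based methods try to fix.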
35
+
36
+ In this paper, we propose a Confidence-based Partial Label Learning (CPLL) model for crowd-annotated NER. For C1, we treat the number of times each class is labeled for a sample as the prior confidence provided by the annotators. We also learn the confidence scores via an Expectation-Maximization (EM) algorithm (Dempster et al., 1977). We estimate the real conditional probability $P(Y = y|T = t, X = \mathbf{x})$ via a true posterior estimator based on the confidence, which consists of the prior and posterior confidences. For C2, we learn a token- and content-dependent confidence via a confidence estimator that considers both the token $t$ and the sequence input $\mathbf{x}$ , because the candidate labels are always token-dependent and content-dependent. In
37
+
38
+ fact, our model can be applied to all sequence labeling tasks, such as word segmentation, part-of-speech tagging, etc. We conduct a series of experiments on one real-world dataset and four synthetic datasets. The empirical results show that our model can make use of crowd-annotated data effectively. We also explore the influence of annotation inconsistency and the balance of prior and posterior confidences.
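The prior confidence used for C1 can be sketched as the normalized count of times each candidate label was assigned to a token (an illustrative helper; the function name and interface are ours, not the paper's code):

```python
def prior_confidence(candidate_labels, counts):
    """Normalize annotator vote counts into a prior confidence distribution."""
    total = sum(counts)
    return {y: a / total for y, a in zip(candidate_labels, counts)}

# Hypothetical token: three annotators labeled it B-PER twice and O once.
conf = prior_confidence(["B-PER", "O"], [2, 1])
print(conf)
```

Unlike the binary candidate sets of general PLL, this distribution keeps the information that one candidate was proposed more often than another.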
39
+
40
+ The main contributions of this work are listed as follows.
41
+
42
+ - To better utilize the crowd-annotated data, we propose a CPLL algorithm to incorporate the prior and posterior confidences for the sequence labeling task (i.e., NER).
43
+ - To take the confidence scores into account, we design a true posterior estimator and confidence estimator to update the probability distribution of ground truth and token- and content-dependent confidence iteratively via the EM algorithm.
44
+ - Extensive experiments on both real-world and synthetic datasets show that our CPLL model outperforms state-of-the-art baselines, indicating that our model disambiguates noisy labels effectively.
45
+
46
+ # 2 Our Approach
47
+
48
+ In this section, we first give the formal definition of our task. Then, we provide an overview of our proposed CPLL model. Finally, we introduce the main components contained in our model.
49
+
50
+ # 2.1 Formal Definition
51
+
52
+ Given a training corpus $\mathcal{D} = \{\mathbf{x}_i, (\hat{Y}_i, A_i)\}_{i=1}^{|\mathcal{D}|}$ , where $\mathbf{x} = \{t_1, t_2, \dots, t_{|\mathbf{x}|}\}, (\hat{Y}, A) = \{(\hat{\mathbf{y}}_1, \mathbf{a}_1), (\hat{\mathbf{y}}_2, \mathbf{a}_2), \dots, (\hat{\mathbf{y}}_{|\mathbf{x}|}, \mathbf{a}_{|\mathbf{x}|})\}$ . Here, $\hat{\mathbf{y}} = \{y_1, y_2, \dots, y_{|\hat{\mathbf{y}}|}\}$ is the candidate label set of the token $t$ and $\mathbf{a} = [a_1, a_2, \dots, a_{|\hat{\mathbf{y}}|}]$ is
53
+
54
+ ![](images/8883048ca92168c4264d1195e913e59d0163e381f88b2f76505dcfd9806d1479.jpg)
55
+ Figure 1: The framework of our CPLL model, which consists of a true posterior estimator and confidence estimator. The true posterior estimator is used to predict the true posterior $P(Y = y|T = t, X = \mathbf{x})$ based on the confidence score learned by the confidence estimator. The confidence estimator learns the confidence based on the prior confidence obtained from annotators and the posterior confidence learned by the model.
56
+
57
+ labeled times obtained from the annotations. Specifically, $a$ is the number of times candidate label $y$ is assigned to token $t$. $\hat{\mathbf{y}} \in \{2^{\mathcal{Y}}\backslash \emptyset \backslash \mathcal{Y}\}$, where $\mathcal{Y}$ is the label space and $2^{\mathcal{Y}}$ denotes its power set. For the rest of this paper, $y$ denotes the true label of token $t$ in text $\mathbf{x}$ unless otherwise specified. The goal of this task is to predict the true posterior probability $P(Y = y|T = t,X = \mathbf{x})$ of token $t$ in text $\mathbf{x}$.
58
+
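As a toy illustration of this setup, the candidate set $\hat{\mathbf{y}}$ and labeled times $\mathbf{a}$ for each token can be collected directly from the annotators' label sequences. This is a sketch with hypothetical annotations, not the authors' preprocessing code:

```python
# Toy sketch: building (candidate set, labeled times) pairs per token
# from three hypothetical annotators' token-level label sequences.
from collections import Counter

def build_partial_labels(annotations):
    """annotations: list of per-annotator label sequences for one sentence x.
    Returns, for each token t, the candidate set y_hat and labeled times a."""
    num_tokens = len(annotations[0])
    partial = []
    for t in range(num_tokens):
        counts = Counter(ann[t] for ann in annotations)
        candidates = sorted(counts)              # y_hat: distinct labels seen
        times = [counts[y] for y in candidates]  # a: how often each was chosen
        partial.append((candidates, times))
    return partial

# Three annotators disagree on the second token.
anns = [["O", "B-PER", "O"],
        ["O", "B-LOC", "O"],
        ["O", "B-PER", "O"]]
print(build_partial_labels(anns)[1])  # (['B-LOC', 'B-PER'], [1, 2])
```

Note that, unlike general PLL, the same candidate label can carry a count greater than one, which is exactly the information the prior confidence exploits.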
59
+ # 2.2 Overview
60
+
61
+ In this paper, we propose a Confidence-based Partial Label Learning (CPLL) model for crowd-annotated NER (Figure 1). Specifically, we learn the true posterior $P(Y = y|T = t,X = \mathbf{x})$ via a true posterior estimator $f$ and a confidence score $g(y;\hat{\mathbf{y}},t,\mathbf{x})$ by minimizing the following risk:
62
+
63
+ $$
64
+ R = \mathbb{E}_{p(\mathbf{x},\hat{\mathbf{y}})}\left[\sum_{t\in\mathbf{x}}\sum_{y}\underbrace{g(y;\hat{\mathbf{y}},t,\mathbf{x})}_{\text{Confidence}} * \underbrace{\mathcal{L}(f(y;t,\mathbf{x}),y)}_{\text{True posterior}}\right] \tag{1}
65
+ $$
66
+
67
+ where the classifier $f(y; t, \mathbf{x})$ is used to predict $P(Y = y | T = t, X = \mathbf{x})$ and $\mathcal{L}$ is the loss. We rely on the Expectation-Maximization algorithm (Dempster et al., 1977) to find the maximum-likelihood parameters of CPLL by regarding the ground truth as a latent variable. In the M-step, we train a naive classifier $f$ to predict the true posterior $P(Y = y | T = t, X = \mathbf{x})$ via a true posterior estimator (Section 2.3). In the E-step, we update the confidence score via a confidence estimator (Section 2.4), which consists of the prior confidences (calculated from the annotations) and posterior confidences (learned by the model).
70
+
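The risk in Equation 1 can be sketched numerically as follows; the arrays are toy stand-ins for the model's per-token losses and confidences, not the authors' code:

```python
# Minimal numpy sketch of the confidence-weighted empirical risk in Eq. (1):
# each per-token, per-label loss L(f(y; t, x), y) is scaled by the
# confidence g(y; y_hat, t, x) before summing over tokens and labels.
import numpy as np

loss = np.array([[0.11, 0.05],   # toy losses for 2 tokens x 2 labels
                 [0.51, 0.92]])
g    = np.array([[1.0, 0.0],     # toy confidence weights (rows sum freely)
                 [0.5, 0.5]])

R = float((g * loss).sum())      # confidence-weighted empirical risk
print(R)  # 0.825
```

In the EM loop, the M-step minimizes `R` with respect to the classifier parameters while `g` is held fixed, and the E-step recomputes `g` from the updated posteriors.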
71
+ # 2.3 True Posterior Estimator
72
+
73
+ First, we train a naive classifier as our true posterior estimator $f$ to infer the true posterior $P(Y = y|T = t,X = \mathbf{x})$. To model the sequence, we adopt a pre-trained language model (BERT (Kenton and Toutanova, 2019)) $\mathcal{M}$ to learn content-aware token representations. Specifically, we input the sequence $\mathbf{x} = \{t_1,t_2,\dots,t_{|\mathbf{x}|}\}$ into $\mathcal{M}$ to obtain the sequence representations,
74
+
75
+ $$
76
+ H = \mathcal {M} (\mathbf {x}, \theta_ {\mathcal {M}}) \tag {2}
77
+ $$
78
+
79
+ where $\theta_{\mathcal{M}}$ denotes the parameters of $\mathcal{M}$ and $H = [h_1, h_2, \dots, h_{|\mathbf{x}|}]$; $h$ is token $t$'s content-aware representation.
80
+
81
+ Then, we utilize a fully connected layer (FC) to predict the probability distribution,
82
+
83
+ $$
84
+ f (y; t, \mathbf {x}) = \sigma (W * h + b) \tag {3}
85
+ $$
86
+
87
+ where $\sigma$ is a sigmoid function and $\theta_{FC} = \{W,b\}$ is the set of learnable parameters of the FC layer. We regard $\theta = \{\theta_{\mathcal{M}},\theta_{FC}\}$ as the parameter set of the true posterior estimator $f$. Negative learning (Kim et al., 2019) is adopted, which considers not only "the token
88
+
89
+ belongs to a positive label (candidate label $y \in \hat{\mathbf{y}}$)" but also "the token does not belong to a negative label (its complementary label $y \notin \hat{\mathbf{y}}$)". The loss function is computed as
90
+
91
+ $$
92
+ \mathcal {L} (f (y; t, \mathbf {x}), y) = \left\{ \begin{array}{l l} - \log (f (y; t, \mathbf {x})), & y \in \hat {\mathbf {y}} \\ - \log (1 - f (y; t, \mathbf {x})), & y \notin \hat {\mathbf {y}} \end{array} \right. \tag {4}
93
+ $$
94
+
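A small numpy sketch of Equations 3 and 4 illustrates why the sigmoid matters here: each label gets an independent probability, so the negative-learning loss can treat candidate and complementary labels separately. `W`, `b`, and `h` are random stand-ins for the learned parameters and the BERT output:

```python
# Sketch of Eqs. (3)-(4). Because sigma in Eq. (3) is a sigmoid (not a
# softmax), each label's score is an independent probability, which is what
# the negative-learning loss needs: -log f for candidate labels,
# -log(1 - f) for complementary labels.
import numpy as np

rng = np.random.default_rng(0)
num_labels, hidden = 5, 8
W = rng.normal(size=(num_labels, hidden))   # stand-in for learned weights
b = np.zeros(num_labels)
h = rng.normal(size=hidden)                 # content-aware token representation

f = 1.0 / (1.0 + np.exp(-(W @ h + b)))      # Eq. (3): f(y; t, x) per label y
candidate = np.array([True, False, True, False, False])   # is y in y_hat?
loss = np.where(candidate, -np.log(f), -np.log(1.0 - f))  # Eq. (4)
print(loss.shape)  # (5,)
```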
95
+ Finally, we optimize the empirical risk by integrating confidence $g(y; \hat{\mathbf{y}}, t, \mathbf{x})$ with the loss function (Equation 1). We will introduce the confidence $g(y; \hat{\mathbf{y}}, t, \mathbf{x})$ in detail below.
96
+
97
+ # 2.4 Confidence Estimator
98
+
99
+ The confidence estimator is used to learn the confidence scores $g(y; \hat{\mathbf{y}}, t, \mathbf{x})$ , which represents the confidence of label $y$ given the token $t$ , text sequence $\mathbf{x}$ , and partial label $\hat{\mathbf{y}}$ .
100
+
101
+ $$
102
+ g(y; \hat{\mathbf{y}}, t, \mathbf{x}) = \alpha * c_{y;t,\mathbf{x}}^{A} + (1 - \alpha) * c_{y;t,\mathbf{x}}^{M} \tag{5}
103
+ $$
104
+
105
+ where the confidence score $c_{y;t,\mathbf{x}}^{M}$ is learned by the model and $c_{y;t,\mathbf{x}}^{A}$ is given by the annotators; $\alpha$ is a hyper-parameter used to balance the two terms. The annotators affect the quality of the dataset, and we can calculate the prior confidence from the number of times each class is labeled. However, the prior confidence is biased, since the selected annotators have their own biases. To address this problem, we also let the model learn the posterior confidence to reduce the biases in the prior confidence.
106
+
107
+ Posterior Confidence We update the posterior confidence $c_{y;t,\mathbf{x}}^{M}$ based on the true posterior distribution $P(Y = y|T = t, X = \mathbf{x})$ estimated by the true posterior estimator $f(y; t, \mathbf{x})$:
108
+
109
+ $$
110
+ c _ {y; t, \mathbf {x}} ^ {M} = \left\{ \begin{array}{l l} \frac {\exp (P (Y = y | T = t , X = \mathbf {x}))}{\sum_ {\hat {y} \in \hat {\mathbf {y}}} \exp (P (Y = \hat {y} | T = t , X = \mathbf {x}))}, & y \in \hat {\mathbf {y}} \\ \frac {\exp (P (Y = y | T = t , X = \mathbf {x}))}{\sum_ {\hat {y} \notin \hat {\mathbf {y}}} \exp (P (Y = \hat {y} | T = t , X = \mathbf {x}))}, & y \notin \hat {\mathbf {y}} \end{array} \right. \tag {6}
111
+ $$
112
+
113
+ We calculate the confidence score for positive and negative labels independently.
114
+
115
+ Prior Confidence We translate the labeled times $a$ obtained from the annotations into the prior confidence $c_{y;t,\mathbf{x}}^{A}$:
116
+
117
+ $$
118
+ c_{y;t,\mathbf{x}}^{A} = \left\{ \begin{array}{ll} \frac{\exp(a)}{\sum_{\tilde{a}\in\mathbf{a}}\exp(\tilde{a})}, & y \in \hat{\mathbf{y}} \\ 0, & y \notin \hat{\mathbf{y}} \end{array} \right. \tag{7}
119
+ $$
120
+
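Equations 5-7 can be sketched together in a few lines of numpy: a softmax over the candidate and non-candidate partitions for the posterior term, normalized labeled times for the prior term, and an $\alpha$-blend. All names and toy values are illustrative:

```python
# Sketch of the confidence estimator (Eqs. 5-7), assuming one token with
# model posteriors p over all labels and labeled times (0 for non-candidates).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def confidence(p, candidate_mask, times, alpha=0.5):
    """Return g(y; y_hat, t, x) for every label y of one token."""
    c_m = np.zeros_like(p)
    c_m[candidate_mask] = softmax(p[candidate_mask])     # Eq. 6, y in y_hat
    c_m[~candidate_mask] = softmax(p[~candidate_mask])   # Eq. 6, y not in y_hat
    c_a = np.zeros_like(p)
    c_a[candidate_mask] = softmax(times[candidate_mask].astype(float))  # Eq. 7
    return alpha * c_a + (1 - alpha) * c_m               # Eq. 5

p = np.array([0.7, 0.2, 0.1])          # P(Y=y | T=t, X=x) from the estimator
mask = np.array([True, True, False])   # two candidate labels, one complement
times = np.array([2, 1, 0])            # labeled times a from the annotators
g = confidence(p, mask, times, alpha=0.5)
print(np.round(g, 3))
```

Because both terms are normalized over the candidate partition, `g` still sums to one over the candidate labels for any $\alpha$.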
121
+ Note that both $c_{y;t,\mathbf{x}}^{M}$ and $c_{y;t,\mathbf{x}}^{A}$ are token- and content-dependent: the annotations are always affected by both the token itself and its content, so we model the confidence by considering both. Finally, we compute the final confidence score $g(y; \hat{\mathbf{y}}, t, \mathbf{x})$ via Equation 5, which accounts for the biases of both the annotators and the model.
+
+ <table><tr><td></td><td>#Sample</td><td>#TIME</td><td>#PLACE</td><td>#PERSON</td></tr><tr><td>Training</td><td>1000</td><td>6934</td><td>958</td><td>3518</td></tr><tr><td>Dev</td><td>440</td><td>955</td><td>147</td><td>351</td></tr><tr><td>Test</td><td>441</td><td>1015</td><td>171</td><td>356</td></tr></table>
+
+ Table 2: Statistics of the real-world dataset. #Sample is the number of samples in each split; #TIME, #PLACE, and #PERSON are the numbers of time, place, and person entities.
128
+
129
+ We update the parameters $\theta$ in the M-step and the confidence scores in the E-step of the EM algorithm; that is, we run the true posterior estimator and the confidence estimator iteratively. $c_{y;t,\mathbf{x}}^{M}$ is initialized to $\frac{1}{|\hat{\mathbf{y}}|}$ for $y\in\hat{\mathbf{y}}$ and $\frac{1}{|\mathcal{Y}| - |\hat{\mathbf{y}}|}$ for $y\notin\hat{\mathbf{y}}$.
130
+
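The initialization just described is a uniform distribution over the candidate set and, separately, over its complement; a minimal sketch:

```python
# Initialization sketch for the posterior confidence c^M described above:
# uniform over the candidate set y_hat, and uniform over its complement.
import numpy as np

def init_posterior_confidence(num_labels, candidate_idx):
    mask = np.zeros(num_labels, dtype=bool)
    mask[candidate_idx] = True
    c_m = np.empty(num_labels)
    c_m[mask] = 1.0 / mask.sum()                   # 1/|y_hat| for y in y_hat
    c_m[~mask] = 1.0 / (num_labels - mask.sum())   # 1/(|Y| - |y_hat|) otherwise
    return c_m

c_m = init_posterior_confidence(5, [0, 2])
print(c_m)  # candidate labels get 1/2, the other three get 1/3
```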
131
+ # 3 Experimental Setups
132
+
133
+ In this section, we first introduce the one real-world dataset and four synthetic datasets adopted to evaluate performance (Section 3.1). Then, we list the selected popular baselines used to investigate the validity of our CPLL model (Section 3.2). Finally, we present the implementation details and metrics so that the experiments can be replicated easily (Section 3.3).
134
+
135
+ # 3.1 Datasets
136
+
137
+ Real-World Dataset. To build the real-world dataset, we ask annotators to label the person, place, and time entities in the text independently. Each sample is assigned to three annotators, together with guidelines and several examples. Specifically, we ask three students to label 1000 samples as the training set. The average Kappa value among the annotators is 0.215, indicating that the crowd annotators have low agreement on identifying entities in this data. To evaluate system performance, we create a subset of the corpus with gold annotations. Concretely, we randomly select 881 sentences from the raw dataset and let two experts produce the gold annotations. Among them, we use 440 sentences as the development set and the remaining 441 as the test set. Table 2 shows the statistics of this dataset.
138
+
139
+ Synthetic Datasets. Inspired by (Rodrigues et al., 2014), we build synthetic datasets by adding
140
+
141
+ <table><tr><td></td><td>#Original</td><td>r</td><td>BI</td><td>#c</td><td>Error Percent</td></tr><tr><td rowspan="4">Weibo</td><td rowspan="4">4951</td><td>5%</td><td>35</td><td>134</td><td>3.4%</td></tr><tr><td>10%</td><td>143</td><td>546</td><td>13.9%</td></tr><tr><td>20%</td><td>494</td><td>1706</td><td>44.4%</td></tr><tr><td>25%</td><td>615</td><td>2411</td><td>61.0%</td></tr><tr><td rowspan="4">Resume</td><td rowspan="4">79014</td><td>5%</td><td>244</td><td>2011</td><td>2.8%</td></tr><tr><td>10%</td><td>920</td><td>7361</td><td>10.4%</td></tr><tr><td>20%</td><td>2979</td><td>25408</td><td>35.9%</td></tr><tr><td>25%</td><td>4145</td><td>37585</td><td>52.8%</td></tr><tr><td rowspan="4">Ontonotes</td><td rowspan="4">41203</td><td>5%</td><td>295</td><td>1246</td><td>3.7%</td></tr><tr><td>10%</td><td>978</td><td>4368</td><td>12.9%</td></tr><tr><td>20%</td><td>3151</td><td>14849</td><td>43.6%</td></tr><tr><td>25%</td><td>4420</td><td>20542</td><td>60.5%</td></tr><tr><td rowspan="4">MSRA</td><td rowspan="4">241809</td><td>5%</td><td>1439</td><td>6869</td><td>3.4%</td></tr><tr><td>10%</td><td>5115</td><td>26343</td><td>13.0%</td></tr><tr><td>20%</td><td>16729</td><td>86549</td><td>42.0%</td></tr><tr><td>25%</td><td>23163</td><td>120707</td><td>59.4%</td></tr></table>
142
+
143
+ Table 3: Statistics of the synthetic datasets. #Original is the number of tokens labeled as an entity (not O) in the original dataset. BI is the number of tokens with a wrong BI label but the right category, and #c is the number of tokens with a wrong category but the right BI label. Error Percent $=$ (BI + #c)/#Original.
144
+
145
+ noise to four typical NER datasets: MSRA (Levow, 2006), Weibo (Peng and Dredze, 2015), Ontonotes 4.0 (Weischedel et al., 2011), and Resume (Zhang and Yang, 2018). To simulate realistic noise, we corrupt the original labels using four rules: 1) BE (Bound Error), which adds or deletes tokens of an entity to break its boundary (e.g., changing "room 1003" to "(room 1003"); 2) ME (Missing Error), which removes an entity from the labels (e.g., "David" is not labeled); 3) CE (Category Error), which changes the category of an entity (e.g., "Location" to "Organization"); 4) SE (Segmentation Error), which splits an entity into two (e.g., "tomorrow at 10:00 am" into "tomorrow" and "at 10:00 am"). We apply each rule randomly with a perturbation rate $r$, set to $10\%$ in the main experiments. Additionally, we explore the influence of annotation inconsistency with different rates. Table 3 shows the statistics of these datasets under token-level majority voting. A large number of entities are perturbed by our rules; for example, more than $40\%$ of the tokens labeled as entities are perturbed at a perturbation rate $r$ of $20\%$.
146
+
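As an illustration, one of the rules (CE, category error) can be sketched on BIO tags as follows; the other rules follow the same pattern. This is our reading of the rules, not the authors' perturbation script, and the label inventory is hypothetical:

```python
# Illustrative sketch of the CE (category error) rule: with probability r,
# keep a token's BI prefix but swap its entity category for another one.
import random

def category_error(tags, r, rng):
    out = []
    for tag in tags:
        if tag != "O" and rng.random() < r:
            prefix, cat = tag.split("-")
            new_cat = rng.choice([c for c in ("PER", "LOC", "ORG") if c != cat])
            out.append(f"{prefix}-{new_cat}")   # keep BI, change category
        else:
            out.append(tag)
    return out

rng = random.Random(0)
print(category_error(["B-LOC", "I-LOC", "O"], r=1.0, rng=rng))
```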
147
+ # 3.2 Baselines
148
+
149
+ To verify the effectiveness of our CPLL model, we compare it with several strong and typical baselines, which fall into three groups: voting-based models, partial label learning-based models, and annotator-based models.
152
+
153
+ - Voting-based models. We select two voting-based models, entity-level and token-level voting models. The entity-level voting model obtains the ground truth by voting at the entity level. The token-level voting model calculates the ground truth by voting at the token level. A BERT-based sequence labeling model (Kenton and Toutanova, 2019) is trained based on the ground truth calculated by voting.
154
+
155
+ - Partial label learning-based models. We adopt two classic PLL baselines to utilize the crowd-annotated data with multiple candidate labels. PRODEN-mlp (Lv et al., 2020) adopts a classifier-consistent risk estimator with a progressive identification method for PLL. Wen et al. (2021) propose a Leveraged Weighted (LW) loss for PLL that takes both partial and non-partial labels into account and is proved to be risk-consistent; it achieved state-of-the-art results on various computer vision tasks. We implement these models by adapting the official code to our NER task.
156
+
157
+ - Annotator-based models. Given the success of fully-supervised learning, a natural idea is to recover fully-supervised labels from crowd-annotated data. Seqcrowd (Nguyen et al., 2017) uses a crowd component, a Hidden Markov Model (HMM) learned with the Expectation-Maximization algorithm, to transform crowd-annotated data into fully-supervised data instead of simply voting at the token or entity level. Once the ground truth is estimated by this crowd component, an efficient fully-supervised learning method can be applied to the task.
158
+
159
+ # 3.3 Implementation Details and Metrics
160
+
161
+ We implement our model with the PyTorch (Paszke et al., 2019) based Transformers library and run it on a GTX TITAN X GPU. The Chinese RoBERTa-wwm-ext model (Cui et al., 2019) is used for our true posterior estimator. We use the Adam optimizer (Kingma and Ba, 2014) and set different learning rates for the BERT module (0.00002) and the remaining modules (0.002). The max sequence length
162
+
163
+ <table><tr><td colspan="2"></td><td colspan="2">Real-World</td><td colspan="2">Ontonotes</td><td colspan="2">Weibo</td><td colspan="2">Resume</td><td>MSRA</td></tr><tr><td colspan="2"></td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Test</td></tr><tr><td>Ours</td><td>CPLL</td><td>90.37</td><td>90.60</td><td>79.39</td><td>81.47</td><td>69.72</td><td>68.23</td><td>96.57</td><td>96.07</td><td>95.42</td></tr><tr><td rowspan="2">Voting</td><td>Token-level</td><td>89.45</td><td>90.40</td><td>78.17</td><td>80.12</td><td>67.79</td><td>63.81</td><td>95.81</td><td>95.39</td><td>94.68</td></tr><tr><td>Entity-level</td><td>89.79</td><td>90.04</td><td>78.02</td><td>79.30</td><td>65.59</td><td>59.34</td><td>95.64</td><td>94.88</td><td>94.78</td></tr><tr><td rowspan="2">PLL</td><td>PRODEN-mlp</td><td>87.39</td><td>87.90</td><td>73.04</td><td>75.36</td><td>66.37</td><td>61.85</td><td>93.90</td><td>94.90</td><td>92.46</td></tr><tr><td>LW loss</td><td>88.80</td><td>89.83</td><td>79.07</td><td>80.45</td><td>69.63</td><td>64.26</td><td>96.37</td><td>95.64</td><td>95.35</td></tr><tr><td>Annotator</td><td>Seqcrowd</td><td>-</td><td>-</td><td>62.80</td><td>65.34</td><td>47.56</td><td>41.49</td><td>92.73</td><td>93.30</td><td>91.90</td></tr><tr><td>Upper Bound</td><td>Clean data</td><td>-</td><td>-</td><td>79.74</td><td>81.47</td><td>70.83</td><td>68.87</td><td>96.64</td><td>96.31</td><td>95.53</td></tr></table>
164
+
165
+ Table 4: The performance of our model and the baselines in terms of F1. For the real-world dataset, we do not report results for clean data or Seqcrowd, since we have no ground truth for its training set.
166
+
167
+ <table><tr><td></td><td colspan="2">Real-World</td><td colspan="2">Ontonotes</td><td colspan="2">Weibo</td><td colspan="2">Resume</td><td>MSRA</td></tr><tr><td></td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Dev</td><td>Test</td><td>Test</td></tr><tr><td>CPLL</td><td>90.37</td><td>90.60</td><td>79.39</td><td>81.47</td><td>69.72</td><td>68.23</td><td>96.57</td><td>96.07</td><td>95.42</td></tr><tr><td>w/o Posterior Confidence</td><td>89.51</td><td>90.08</td><td>79.11</td><td>80.42</td><td>68.83</td><td>65.84</td><td>95.74</td><td>95.38</td><td>94.79</td></tr><tr><td>w/o Prior Confidence</td><td>90.60</td><td>90.94</td><td>79.68</td><td>80.87</td><td>70.57</td><td>64.90</td><td>96.21</td><td>95.70</td><td>95.20</td></tr><tr><td>w/o Both</td><td>86.73</td><td>86.32</td><td>78.66</td><td>80.22</td><td>67.33</td><td>61.59</td><td>95.72</td><td>95.23</td><td>94.61</td></tr></table>
168
+
169
+ Table 5: Results of the ablation studies.
170
+
171
+ is 512, the batch size is 8, and the dropout rate is 0.1. We search for the best $\alpha$ from 0.1 to 0.9 with a step of 0.1 using the development set. All baselines use the hyper-parameter settings reported in their papers. Our source code will be released upon acceptance.
172
+
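The $\alpha$ grid search on the development set amounts to a one-line maximization; `evaluate` below is a hypothetical stand-in for training CPLL with a given $\alpha$ and returning dev Macro-F1:

```python
# Sketch of the alpha grid search described above (0.1 to 0.9, step 0.1).
# evaluate(alpha) is assumed to return the development-set score.
def pick_alpha(evaluate):
    grid = [round(0.1 * k, 1) for k in range(1, 10)]   # 0.1 .. 0.9
    return max(grid, key=evaluate)

best = pick_alpha(lambda a: 1.0 - abs(a - 0.6))  # toy score peaking at 0.6
print(best)  # 0.6
```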
173
+ To measure model performance, we adopt Macro-F1 as the metric, which is widely used for NER (Yadav and Bethard, 2018). In particular, we evaluate at the span level: a predicted entity is considered correct only when the entire span is matched.
174
+
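Span-level matching can be sketched as follows; for brevity this computes a micro-averaged span F1 over one sentence rather than the Macro-F1 used in the paper:

```python
# Span-level evaluation sketch: a predicted entity counts only if its
# (start, end, type) triple exactly matches a gold span.
def bio_to_spans(tags):
    spans, start = set(), None
    for i, tag in enumerate(tags + ["O"]):      # sentinel flushes last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:
                spans.add((start, i, tags[start][2:]))
                start = None
        if tag.startswith("B-"):
            start = i
    return spans

def span_f1(gold_tags, pred_tags):
    gold, pred = bio_to_spans(gold_tags), bio_to_spans(pred_tags)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "O",     "O", "B-LOC"]   # truncated PER span does not count
print(span_f1(gold, pred))  # 0.5
```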
175
+ # 4 Experimental Results
176
+
177
+ In this section, we conduct a series of experiments to investigate the effectiveness of the proposed CPLL model. Specifically, we compare our model with three kinds of strong baselines (Section 4.1) and conduct ablation studies to explore the influence of the key components of CPLL (Section 4.2). We also investigate the influence of annotation inconsistency (Section 4.3) and of the hyper-parameter $\alpha$, which controls the balance between the posterior and prior confidences (Section 4.4).
178
+
179
+ # 4.1 Main Results
180
+
181
+ To evaluate the performance of our model, we present the results of the compared baselines and our CPLL model (see Table 4). First, our model outperforms all the baselines on both the real-world and synthetic datasets. The labels obtained by voting-based methods (e.g., token-level and entity-level voting) always contain much noise because of the large label space and the complexity of this task. PLL-based models (e.g., PRODEN-mlp and LW loss) ignore how many times each label was assigned by the annotators. Furthermore, annotator-based methods (e.g., Seqcrowd) aim to find the trustworthy label or annotator. Note that Seqcrowd does not work on Weibo and performs poorly on Ontonotes, because it cannot handle small datasets or heavy noise, which is also verified in Section 4.3. All these methods cause information loss, which substantially degrades model performance. Our CPLL model makes use of the crowd-annotated data by translating this task into a PLL task that integrates confidence. Second, our CPLL model reduces the influence of noise effectively: it obtains results comparable to the model trained on clean data. Our confidence estimator learns the bias introduced by the annotations effectively via the posterior and prior confidences.
184
+
185
+ # 4.2 Ablation Studies
186
+
187
+ To evaluate the effectiveness of each component of our model, we conduct ablation studies (see Table 5). We remove the posterior confidence (w/o Posterior Confidence), the prior confidence (w/o Prior Confidence), and both of them (w/o Both) from the CPLL model. For w/o Both, we remove the confidence estimator by setting the confidences to $1 / |\hat{\mathbf{y}}|$ for partial labels and 0 for non-partial labels.
+
+ ![](images/fc6c21b82d39df1c9513501038e68003c1dde5993303bfbf65c79ea902ae6f23.jpg)
+ (a) Weibo
+
+ ![](images/4d1c12532177fede0a39bbde6c689a0e890a6ae38bc168e95a1e87f18a38ce25.jpg)
+ (b) Resume
+
+ ![](images/50d996c24e99c262018cf9a1ec71af65a0ae987406b7de942bf12b0345610344.jpg)
+ (c) Ontonotes
+ Figure 2: The influence of annotation inconsistency.
200
+
201
+ From the results, we make the following observations. 1) The confidence estimator learns the annotation bias effectively: removing it (w/o Both) costs more than 4 F1 points on the test sets of the real-world and Weibo datasets. 2) Both the posterior and prior confidences are useful for this task. The prior confidence is clearly vital for leveraging the labeled confidence given by the annotators; however, it may be biased since the annotators are limited, so the posterior confidence learned by the model is also crucial for partial label learning to rectify the prediction.
202
+
203
+ # 4.3 Influence of Annotation Inconsistency
204
+
205
+ We also explore the influence of annotation inconsistency on the synthetic datasets with various perturbation rates. Annotation inconsistency models the label quality of crowd-sourcing: the larger the perturbation rate, the worse the annotation quality. We report results with rates from $5\%$ to $25\%$ in steps of $5\%$ on the Weibo, Resume, and Ontonotes datasets (Figure 2).
206
+
207
+ First, our CPLL model outperforms all the baselines at every perturbation rate; moreover, the higher the annotation inconsistency, the larger our model's improvement over the baselines, showing that it reduces the influence of annotation inconsistency more effectively. Second, several baselines almost stop working at a large perturbation rate (e.g., $25\%$), while our model handles it effectively; for example, the F1 score of Seqcrowd drops below 20 when the rate $r$ exceeds $20\%$. Third, the annotation quality clearly has a large effect on performance: the higher the inconsistency, the worse the annotation quality and the worse the model performs.
208
+
209
+ # 4.4 Influence of Hyper-parameter $\alpha$
210
+
211
+ We further investigate the influence of the hyper-parameter $\alpha$ (Equation 5), which balances the posterior and prior confidences (Figure 3). The prior confidence reflects the labeled confidence given by the annotators, which is biased due to the selection of annotators. To reduce this bias, our model also estimates the posterior confidence.
212
+
213
+ From the figures, we make the following observations. First, when the noise is high, the smaller the $\alpha$, the better the performance: intuitively, the confidence given by the annotators is not reliable when the perturbation rate $r$ is large. Second, when the noise is low, the trend that a larger $\alpha$ yields better performance is much less pronounced. The reason is that the model can easily disambiguate the ground truth from the candidates since the data is clean; most labels are correct, so the confidence matters less. These findings indicate that our confidence estimator can make use of the prior confidence and learn the posterior confidence effectively.
214
+
215
+ # 5 Related Work
216
+
217
+ In this section, we mainly review the most related works about named entity recognition (Section 5.1) and partial label learning (Section 5.2).
218
+
219
+ # 5.1 Named Entity Recognition
220
+
221
+ Named Entity Recognition (NER) is a research hotspot since it can be applied to many downstream Natural Language Processing (NLP) tasks. A well-trained NER model takes a language sequence as input and marks out all the entities in the sequence with the correct entity type. NER is widely treated as a sequence labeling problem, a token-level tagging task (Chiu and Nichols, 2015; Akbik et al., 2018; Yan et al., 2019). Some researchers instead regard NER as a span-level classification task (Xue et al., 2020; Fu et al., 2021; Alemi et al., 2023). In these works, NER is a fully-supervised learning task based on large-scale labeled data, where each token is assigned a gold label.
+
+ ![](images/e44a579cf6cd3455f8f95db33b3339fb598503fde02cc99d42278c77454d10ff.jpg)
+ (a) Weibo
+
+ ![](images/e01e4b9e6925cd3f162f34a5c3c8e50e90c9cb4766a3e8a71df886139f34937f.jpg)
+ (b) Resume
+
+ ![](images/2d3b22879db84e2f0704ab24b9b15e8629e94e2073ca0864cab2a90a2fcf9d59.jpg)
+ (c) Ontonotes
+ Figure 3: The influence of hyper-parameter $\alpha$, which is leveraged to control the balance between the posterior and prior confidence.
234
+
235
+ Crowdsourcing platforms (e.g., Amazon Mechanical Turk) are a popular way to obtain large labeled datasets. Due to the large label space and the complexity of NER, the quality of the labeled data is low: the ground truth obtained by simple majority voting contains a lot of noise, which greatly limits model performance. Some literature trains the model from multiple annotators directly (Simpson and Gurevych, 2019; Nguyen et al., 2017), mainly focusing on modeling the differences among annotators to find a trustworthy annotator. In fact, a sentence may not be labeled entirely correctly by any single annotator, while each annotator may still label some of the right entities. To address this problem, we translate the task into a partial label learning problem with a prior confidence score.
236
+
237
+ # 5.2 Partial Label Learning
238
+
239
+ Unlike fully-supervised learning, which uses data with a gold label $y$, Partial Label Learning (PLL) assigns a candidate label set to each input $\mathbf{x}$ (Zhang et al., 2016; Wang et al., 2023; Lv et al., 2020). Although the gold label $y$ cannot be guaranteed to lie in the candidate set, most PLL research assumes for simplicity that one of the candidate labels is the gold label. Existing PLL studies fall into two groups: average-based methods (Zhang and Yu, 2015) and identification-based methods (Jin and Ghahramani, 2002; Lyu et al., 2019). Average-based methods (Zhang and Yu, 2015; Hüllermeier and Beringer, 2006) intuitively treat the candidate labels with
240
+
241
+ equal importance. The main weakness of these algorithms is that false positives may severely distract the model with wrong label information. Recently, identification-based methods (Jin and Ghahramani, 2002; Wang et al., 2023) have been proposed to identify the true label from the candidates by regarding the ground truth as a latent variable. A growing body of literature studies representative methods (Lyu et al., 2019; Nguyen and Caruana, 2008), self-training methods (Wen et al., 2021), and loss function adjustments (Wu and Zhang, 2018).
242
+
243
+ However, most current work focuses on image or text classification, and how to model confidence for NER is not well studied. The sequence labeling task aims to identify the entities in a sentence and assign each an entity type at the token level, so modeling the token itself and its content also plays an important role. To address this problem, we design a confidence estimator that predicts token- and content-dependent confidence based on the prior confidence given by the annotators.
244
+
245
+ # 6 Conclusion and Future Work
246
+
247
+ In this paper, we translate crowd-annotated NER into a PLL problem and propose a CPLL model based on the EM algorithm. To rectify the model's predictions, we design a confidence estimator that predicts token- and content-dependent confidence by combining the prior confidence with the posterior confidence. We conduct experiments on one real-world dataset and four synthetic datasets, comparing our proposed CPLL model with several state-of-the-art baselines. Moreover, we conduct ablation studies to verify the effectiveness of the key components and explore the influence of annotation inconsistency.
248
+
249
+ In the future, we would like to investigate the performance of our model on other sequence labeling tasks.
250
+
251
+ # Limitations
252
+
253
+ Although our work shows that our CPLL model learns well from crowd-annotated NER data, there are at least two limitations. First, we set the hyper-parameter $\alpha$ manually; it would be better to design a strategy that learns an adaptive $\alpha$ for each sample automatically. Second, though we experiment mainly on NER, our model can be applied to all sequence labeling tasks, such as part-of-speech (POS) tagging, Chinese word segmentation, and so on. We would like to explore this in future work.
254
+
255
+ # Acknowledgements
256
+
257
+ The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (No.62206057), Shanghai Rising-Star Program (23QA1400200), Natural Science Foundation of Shanghai (23ZR1403500), and CCF-Tencent Open Fund.
258
+
259
+ # References
260
+
261
Alan Akbik, Duncan A. J. Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the 27th International Conference on Computational Linguistics.
262
+ Alexander Alemi, Ian Fischer, Joshua Dillon, Jacob Devlin, Ming-Wei Chang, Kenton Lee, Marco Federici, Anjan Dutta, Patrick Forre, Nate Kush, Robert Geirhos, Jorn-Henrik Jacobsen, Richard Michaelis, and Wieland Zemel. 2023. Miner: Improving out-of-vocabulary named entity recognition from an information theoretic perspective. meeting of the association for computational linguistics.
263
+ Nguyen Bach and Sameer Badaskar. 2007. A review of relation extraction. Literature review for Language and Statistics II, 2:1-15.
264
Jason P.C. Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics.
265
+ Timothy Cour, Ben Sapp, and Ben Taskar. 2011. Learning from partial labels. The Journal of Machine Learning Research, 12:1501-1536.
266
+
267
+ Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pretraining with whole word masking for chinese bert. arXiv: Computation and Language.
268
+ Arthur P Dempster, Nan M Laird, and Donald B Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society: Series B (Methodological), 39(1):1-22.
269
+ Lei Feng and Bo An. 2019. Partial label learning with self-guided retraining. In Proceedings of the AAAI conference on artificial intelligence, volume 33, pages 3542-3549.
270
+ Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, and Masashi Sugiyama. 2020. Provably consistent partial-label learning. Advances in Neural Information Processing Systems, 33:10948-10960.
271
+ Jinlan Fu, Xuanjing Huang, and Pengfei Liu. 2021. SpanNER: Named entity re-/recognition as span prediction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.
272
+ Eyke Hüllermeier and Jürgen Beringer. 2006. Learning from ambiguously labeled examples. Intelligent Data Analysis, 10(5):419-439.
273
+ Rong Jin and Zoubin Ghahramani. 2002. Learning with multiple labels. Advances in neural information processing systems, 15.
274
+ Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages 4171-4186.
275
+ Youngdong Kim, Junho Yim, Juseung Yun, and Junmo Kim. 2019. NLNL: Negative learning for noisy labels. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 101-110.
276
+ Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
277
+ Gina-Anne Levow. 2006. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing, pages 108-117.
278
+ Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, and Masashi Sugiyama. 2020. Progressive identification of true labels for partial-label learning. In International Conference on Machine Learning, pages 6500-6510. PMLR.
279
+ Gengyu Lyu, Songhe Feng, Tao Wang, Congyan Lang, and Yidong Li. 2019. Gm-pll: graph matching based partial label learning. IEEE Transactions on Knowledge and Data Engineering, 33(2):521-535.
280
+
281
+ An T Nguyen, Byron C Wallace, Junyi Jessy Li, An Nenkova, and Matthew Lease. 2017. Aggregating and predicting sequence labels from crowd annotations. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 2017, page 299. NIH Public Access.
282
+ Nam Nguyen and Rich Caruana. 2008. Classification with partial labels. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 551-559.
283
+ Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Z. Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems.
284
+ Nanyun Peng and Mark Dredze. 2015. Named entity recognition for Chinese social media with jointly trained embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 548-554.
285
+ Filipe Rodrigues, Francisco Pereira, and Bernardete Ribeiro. 2014. Sequence labeling with multiple annotators. Machine learning, 95(2):165-181.
286
+ Edwin Simpson and Iryna Gurevych. 2019. A Bayesian approach for sequence tagging with crowds. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1093-1104, Hong Kong, China. Association for Computational Linguistics.
287
+ David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789.
288
+ Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, and Junbo Zhao. 2023. PiCO: Contrastive label disambiguation for partial label learning. In International Conference on Learning Representations.
289
+ Ralph Weischedel, Sameer Pradhan, Lance Ramshaw, Martha Palmer, Nianwen Xue, Mitchell Marcus, Ann Taylor, Craig Greenberg, Eduard Hovy, Robert Belvin, et al. 2011. Ontonotes release 4.0. LDC2011T03, Philadelphia, Penn.: Linguistic Data Consortium.
290
+ Hongwei Wen, Jingyi Cui, Hanyuan Hang, Jiabin Liu, Yisen Wang, and Zhouchen Lin. 2021. Leveraged weighted loss for partial label learning. In International Conference on Machine Learning, pages 11091-11100. PMLR.
291
+
292
+ Xuan Wu and Min-Ling Zhang. 2018. Towards enabling binary decomposition for partial label learning. In IJCAI, pages 2868-2874.
293
+ Mengge Xue, Bowen Yu, Zhenyu Zhang, Tingwen Liu, Yue Zhang, and Bin Wang. 2020. Coarse-to-fine pre-training for named entity recognition. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.
294
+ Vikas Yadav and Steven Bethard. 2018. A survey on recent advances in named entity recognition from deep learning models. In 27th International Conference on Computational Linguistics, COLING 2018, pages 2145-2158. Association for Computational Linguistics (ACL).
295
+ Hang Yan, Bocao Deng, Xiaonan Li, and Xipeng Qiu. 2019. TENER: Adapting transformer encoder for named entity recognition. arXiv preprint arXiv:1911.04474.
296
+ Yan Yan and Yuhong Guo. 2020. Partial label learning with batch label correction. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 6575-6582.
297
+ YaoSheng Yang, Meishan Zhang, Wenliang Chen, Wei Zhang, Haofen Wang, and Min Zhang. 2018. Adversarial learning for chinese ner from crowd annotations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
298
+ Min-Ling Zhang and Fei Yu. 2015. Solving the partial label learning problem: An instance-based approach. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
299
+ Min-Ling Zhang, Bin-Bin Zhou, and Xu-Ying Liu. 2016. Partial label learning via feature-aware disambiguation. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1335-1344.
300
+ Yue Zhang and Jie Yang. 2018. Chinese NER using lattice LSTM. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1554-1564.
301
+ Jie Zhou, Qi Zhang, Qin Chen, Liang He, and Xuan-Jing Huang. 2022. A multi-format transfer learning model for event argument extraction via variational information bottleneck. In Proceedings of the 29th International Conference on Computational Linguistics, pages 1990-2000.
302
+
303
+ A For every submission:
304
+
305
+ A1. Did you describe the limitations of your work?
306
+
307
+ Section Limitations
308
+
309
+ A2. Did you discuss any potential risks of your work?
310
+
311
+ Our work does not have any potential risks.
312
+
313
+ A3. Do the abstract and introduction summarize the paper's main claims?
314
+
315
+ Abstract and Section 1. Introduction
316
+
317
+ A4. Have you used AI writing assistants when working on this paper?
318
+
319
+ Left blank.
320
+
321
+ B Did you use or create scientific artifacts?
322
+
323
+ Left blank.
324
+
325
+ B1. Did you cite the creators of artifacts you used?
326
+
327
+ No response.
328
+
329
+ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
330
+
331
+ No response.
332
+
333
+ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
334
+
335
+ No response.
336
+
337
+ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
338
+
339
+ No response.
340
+
341
+ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
342
+
343
+ No response.
344
+
345
+ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
346
+
347
+ No response.
348
+
349
+ C Did you run computational experiments?
350
+
351
+ 3.3 Implementation Details and Metrics
352
+
353
+ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
354
+
355
+ 3.3 Implementation Details and Metrics
356
+
357
+ The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
358
+
359
+ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
360
+
361
+ 3.3 Implementation Details and Metrics
362
+
363
+ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
364
+
365
+ We run our model using the same seed and select the best model based on the development set.
366
+
367
+ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
368
+
369
+ Not applicable. Left blank.
370
+
371
+ D Did you use human annotators (e.g., crowdworkers) or research with human participants?
372
+
373
+ 3.1 Datasets
374
+
375
+ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
376
+ 3.1 Datasets
377
+ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
378
+ 3.1 Datasets
379
+ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
380
+ 3.1 Datasets
381
+ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Not applicable. Left blank.
382
+ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
383
+
384
+ Not applicable. Left blank.
2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:c957830b914be17161a088a997f90a52c529d13bc67f87d546df41435f0de1df
3
+ size 470961
2023/A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Customized Text Sanitization Mechanism with Differential Privacy/cecb5923-3636-44eb-8dc9-ff69d69aa2f0_content_list.json ADDED
@@ -0,0 +1,1847 @@
1
+ [
2
+ {
3
+ "type": "text",
4
+ "text": "A Customized Text Sanitization Mechanism with Differential Privacy",
5
+ "text_level": 1,
6
+ "bbox": [
7
+ 139,
8
+ 90,
9
+ 855,
10
+ 112
11
+ ],
12
+ "page_idx": 0
13
+ },
14
+ {
15
+ "type": "text",
16
+ "text": "Huimin Chen $^{1*}$ , Fengran Mo $^{2*}$ , Yanhao Wang $^{1}$ , Cen Chen $^{1\\dagger}$ , Jian-Yun Nie $^{2}$ , Chengyu Wang $^{3}$ , Jamie Cui $^{4}$",
17
+ "bbox": [
18
+ 233,
19
+ 124,
20
+ 768,
21
+ 158
22
+ ],
23
+ "page_idx": 0
24
+ },
25
+ {
26
+ "type": "text",
27
+ "text": "<sup>1</sup>East China Normal University <sup>2</sup>Université de Montréal <sup>3</sup>Alibaba Group <sup>4</sup>Ant Group",
28
+ "bbox": [
29
+ 127,
30
+ 159,
31
+ 873,
32
+ 175
33
+ ],
34
+ "page_idx": 0
35
+ },
36
+ {
37
+ "type": "text",
38
+ "text": "saichen@stu.ecnu.edu.cn, fengran.mo@umontreal.ca",
39
+ "bbox": [
40
+ 211,
41
+ 177,
42
+ 789,
43
+ 191
44
+ ],
45
+ "page_idx": 0
46
+ },
47
+ {
48
+ "type": "text",
49
+ "text": "{yhwang, cenchen}@dase.ecnu.edu.cn, Nie@iro.umontreal.ca",
50
+ "bbox": [
51
+ 171,
52
+ 193,
53
+ 831,
54
+ 209
55
+ ],
56
+ "page_idx": 0
57
+ },
58
+ {
59
+ "type": "text",
60
+ "text": "chywang2013@gmail.com, jamie.cui@outlook.com",
61
+ "bbox": [
62
+ 235,
63
+ 210,
64
+ 766,
65
+ 225
66
+ ],
67
+ "page_idx": 0
68
+ },
69
+ {
70
+ "type": "text",
71
+ "text": "Abstract",
72
+ "text_level": 1,
73
+ "bbox": [
74
+ 260,
75
+ 252,
76
+ 339,
77
+ 266
78
+ ],
79
+ "page_idx": 0
80
+ },
81
+ {
82
+ "type": "text",
83
+ "text": "As privacy issues are receiving increasing attention within the Natural Language Processing (NLP) community, numerous methods have been proposed to sanitize texts subject to differential privacy. However, the state-of-the-art text sanitization mechanisms based on metric local differential privacy (MLDP) do not apply to non-metric semantic similarity measures and cannot achieve good trade-offs between privacy and utility. To address the above limitations, we propose a novel Customized Text (CusText) sanitization mechanism based on the original $\\epsilon$ -differential privacy (DP) definition, which is compatible with any similarity measure. Furthermore, CusText assigns each input token a customized output set of tokens to provide more advanced privacy protection at the token level. Extensive experiments on several benchmark datasets show that CusText achieves a better trade-off between privacy and utility than existing mechanisms. The code is available at https://github.com/sai4july/CusText.",
84
+ "bbox": [
85
+ 141,
86
+ 278,
87
+ 460,
88
+ 604
89
+ ],
90
+ "page_idx": 0
91
+ },
92
+ {
93
+ "type": "text",
94
+ "text": "1 Introduction",
95
+ "text_level": 1,
96
+ "bbox": [
97
+ 114,
98
+ 614,
99
+ 258,
100
+ 629
101
+ ],
102
+ "page_idx": 0
103
+ },
104
+ {
105
+ "type": "text",
106
+ "text": "In many Natural Language Processing (NLP) applications, input texts often contain sensitive information that can infer the identity of specific persons (Jegorova et al., 2021), leading to potential privacy leakage that impedes privacy-conscious users from releasing data to service providers (Carlini et al., 2019, 2021; Song and Raghunathan, 2020). Moreover, legal restrictions such as $\\mathrm{CCPA}^1$ and $\\mathrm{GDPR}^2$ may further limit the sharing of sensitive textual data. This makes NLP service providers difficult to collect training data unless the privacy concerns of data owners, including individuals and institutions, are well discussed.",
107
+ "bbox": [
108
+ 112,
109
+ 639,
110
+ 489,
111
+ 847
112
+ ],
113
+ "page_idx": 0
114
+ },
115
+ {
116
+ "type": "image",
117
+ "img_path": "images/387a9bee705680d4db12453183db61d0839fcd0c6388397b70380274774e10ba.jpg",
118
+ "image_caption": [
119
+ "Figure 1: A privacy-preserving NLP workflow."
120
+ ],
121
+ "image_footnote": [],
122
+ "bbox": [
123
+ 510,
124
+ 253,
125
+ 882,
126
+ 386
127
+ ],
128
+ "page_idx": 0
129
+ },
130
+ {
131
+ "type": "text",
132
+ "text": "To address such privacy issues, great efforts (Lyu et al., 2020; Anil et al., 2022; Dupuy et al., 2022; Li et al., 2022; Mireshghallah et al., 2021) have been made to train language models (LMs) with differential privacy (Dwork et al., 2006) (DP), which has been regarded as the de facto standard for privacy-preserving computation. These approaches mainly focus on adding calibrated noise to gradients or text representations during the training phase so that sensitive user data cannot be inferred from trained LMs. Nevertheless, they require service providers to collect the original data for LM training. As such, data owners may still have privacy concerns when service providers are not fully trusted.",
133
+ "bbox": [
134
+ 507,
135
+ 448,
136
+ 884,
137
+ 673
138
+ ],
139
+ "page_idx": 0
140
+ },
141
+ {
142
+ "type": "text",
143
+ "text": "To solve the privacy problem from the root, a common paradigm is to let data owners sanitize their data locally before releasing them to the service provider, as shown in Figure 1. Generally, such privatization mechanisms (Feyisetan et al., 2019, 2020; Yue et al., 2021) generate a sanitized text document by replacing the original tokens (e.g., characters, words, or $n$ -grams) in the original document sequentially with new tokens sampled from output token sets. Specifically, they adopt the Metric Local Differential Privacy (Chatzikokolakis et al., 2013) (MLDP, also known as $d_{\\chi}$ -privacy), a relaxation of the original DP definition, to provide the privacy and utility guarantees simultaneously. On the one hand, MLDP inherits the idea of DP",
144
+ "bbox": [
145
+ 507,
146
+ 677,
147
+ 882,
148
+ 917
149
+ ],
150
+ "page_idx": 0
151
+ },
152
+ {
153
+ "type": "page_footnote",
154
+ "text": "*Equal contribution.",
155
+ "bbox": [
156
+ 139,
157
+ 854,
158
+ 272,
159
+ 866
160
+ ],
161
+ "page_idx": 0
162
+ },
163
+ {
164
+ "type": "page_footnote",
165
+ "text": "† Corresponding author.",
166
+ "bbox": [
167
+ 142,
168
+ 868,
169
+ 289,
170
+ 879
171
+ ],
172
+ "page_idx": 0
173
+ },
174
+ {
175
+ "type": "page_footnote",
176
+ "text": "<https://oag.ca.gov/privacy/ccpa>",
177
+ "bbox": [
178
+ 137,
179
+ 879,
180
+ 425,
181
+ 892
182
+ ],
183
+ "page_idx": 0
184
+ },
185
+ {
186
+ "type": "page_footnote",
187
+ "text": "$^{2}$ https://data.europa.eu/eli/reg/2016/679/oj",
188
+ "bbox": [
189
+ 117,
190
+ 894,
191
+ 470,
192
+ 917
193
+ ],
194
+ "page_idx": 0
195
+ },
196
+ {
197
+ "type": "page_number",
198
+ "text": "5747",
199
+ "bbox": [
200
+ 478,
201
+ 927,
202
+ 519,
203
+ 940
204
+ ],
205
+ "page_idx": 0
206
+ },
207
+ {
208
+ "type": "footer",
209
+ "text": "Findings of the Association for Computational Linguistics: ACL 2023, pages 5747-5758",
210
+ "bbox": [
211
+ 228,
212
+ 945,
213
+ 768,
214
+ 958
215
+ ],
216
+ "page_idx": 0
217
+ },
218
+ {
219
+ "type": "footer",
220
+ "text": "July 9-14, 2023 ©2023 Association for Computational Linguistics",
221
+ "bbox": [
222
+ 295,
223
+ 959,
224
+ 700,
225
+ 971
226
+ ],
227
+ "page_idx": 0
228
+ },
229
+ {
230
+ "type": "text",
231
+ "text": "to ensure that the outputs of any adjacent input tokens are indistinguishable to protect the original tokens from being inferred. On the other hand, MLDP also preserves the utility of sanitized texts by assigning higher sampling probabilities to tokens that are semantically closer to the original ones. In these mechanisms, any metric distance (e.g., Euclidean distance) can be used to measure the semantic similarities between tokens.",
232
+ "bbox": [
233
+ 112,
234
+ 84,
235
+ 492,
236
+ 229
237
+ ],
238
+ "page_idx": 1
239
+ },
240
+ {
241
+ "type": "text",
242
+ "text": "However, the above text sanitization mechanisms suffer from two inherent limitations. First, since MLDP is specific for metric distances satisfying the triangle inequality, they do not apply to non-metric semantic similarity measures in NLP applications such as cosine similarity (Mrksic et al., 2016) and TF-IDF (Salton and Buckley, 1988). Second, they cannot achieve good privacy-utility trade-offs, i.e., either having high privacy costs with insufficient protections or resulting in low accuracy of models trained on sanitized data. We observe that the low accuracy arises as they treat each token in the text equally by assigning each input token with the same output set, which can be excessively large (e.g., the size of the output set is over 80,000). Such a huge output set leads to high costs for MLDP and thus impedes the model's utility when the privacy budget is tight.",
243
+ "bbox": [
244
+ 115,
245
+ 236,
246
+ 490,
247
+ 525
248
+ ],
249
+ "page_idx": 1
250
+ },
251
+ {
252
+ "type": "text",
253
+ "text": "To this end, we propose a novel Customized Text (CusText) sanitization mechanism that provides more advanced privacy protection at the token level. Specifically, to generalize CusText to all similarity measures, we turn to a mechanism that satisfies the original $\\epsilon$ -Differential Privacy ( $\\epsilon$ -DP), i.e., Exponential Mechanism (EM) (McSherry and Talwar, 2007), to sample the output for each input token. Meanwhile, we inherit the merit of MLDP by designing an appropriate scoring function for EM to take into account the semantic similarities between tokens for sampling. Then, to achieve a better trade-off between privacy and utility, we design a mapping scheme to assign each input token a customized output set of a much smaller size for token-level privacy protection. Here, we can adjust a customized parameter $K$ that determines the size of the output set for each input token for different utility-privacy trade-offs. Using the mapping scheme, we exclude most of the tokens that are semantically irrelevant to the input token from consideration and reduce the privacy costs caused by excessive output set sizes. As the privacy risks of some tokens, e.g., stopwords, are low in practice,",
254
+ "bbox": [
255
+ 115,
256
+ 533,
257
+ 489,
258
+ 917
259
+ ],
260
+ "page_idx": 1
261
+ },
262
+ {
263
+ "type": "text",
264
+ "text": "we further propose an improved CusText+ mechanism that skips the stopwords in the sampling process to achieve higher utility without incurring greater privacy losses.",
265
+ "bbox": [
266
+ 507,
267
+ 84,
268
+ 884,
269
+ 148
270
+ ],
271
+ "page_idx": 1
272
+ },
273
+ {
274
+ "type": "text",
275
+ "text": "Finally, we conduct extensive experiments on three benchmark datasets to demonstrate that CusText achieves better privacy-utility trade-offs than the state-of-the-art text sanitization mechanisms in (Feyisetan et al., 2020; Yue et al., 2021). More particularly, with the same privacy parameter $\\epsilon$ , the models trained on texts sanitized by CusText have significantly higher accuracy rates than those sanitized by SANTEXT (Yue et al., 2021). Furthermore, when the utilities of models are comparable, CusText provides better protection against two token inference attacks than SANTEXT.",
276
+ "bbox": [
277
+ 507,
278
+ 149,
279
+ 885,
280
+ 341
281
+ ],
282
+ "page_idx": 1
283
+ },
284
+ {
285
+ "type": "text",
286
+ "text": "2 Related Work",
287
+ "text_level": 1,
288
+ "bbox": [
289
+ 507,
290
+ 357,
291
+ 665,
292
+ 373
293
+ ],
294
+ "page_idx": 1
295
+ },
296
+ {
297
+ "type": "text",
298
+ "text": "There have been numerous studies on the vulnerability of deep learning models (Carlini et al., 2019; Song and Raghunathan, 2020), including language models (Carlini et al., 2021; Zhao and Chen, 2022) (LMs), against privacy attacks. In particular, such attacks can recover sensitive user attributes or raw texts from trained models. Therefore, incorporating privacy mechanisms with rigorous guarantees is vital to protect LMs from privacy attacks.",
299
+ "bbox": [
300
+ 507,
301
+ 386,
302
+ 884,
303
+ 530
304
+ ],
305
+ "page_idx": 1
306
+ },
307
+ {
308
+ "type": "text",
309
+ "text": "A few attempts at applying anonymization techniques for i.i.d. data (Li et al., 2007; Machanavajjhala et al., 2007) fail to provide strong privacy protection for textual data (Zhao and Chen, 2022). Then, many efforts (Lyu et al., 2020; Anil et al., 2022; Dupuy et al., 2022; Hessel and Schofield, 2021; Li et al., 2022; Mireshghallah et al., 2021) have been made to preserve the utility of LMs on textual data with provable differential privacy (DP) guarantees. Following the application of DP in deep learning (Abadi et al., 2016), they mainly focus on adding calibrated noise to gradients or text representations during the training phase for both utility and privacy. However, they need a trustworthy server to collect original texts from data owners for model training and thus cannot be applied to the scenario without trusted servers.",
310
+ "bbox": [
311
+ 507,
312
+ 532,
313
+ 885,
314
+ 803
315
+ ],
316
+ "page_idx": 1
317
+ },
318
+ {
319
+ "type": "text",
320
+ "text": "To address privacy issues from the root, different (customized) local differential privacy (Duchi et al., 2013; Chatzikokolakis et al., 2013) (LDP) mechanisms have been proposed to allow data owners to sanitize their data locally before releasing them to the server. Due to the high dimensionality and complicated features of textual data, compared with",
321
+ "bbox": [
322
+ 507,
323
+ 806,
324
+ 885,
325
+ 919
326
+ ],
327
+ "page_idx": 1
328
+ },
329
+ {
330
+ "type": "page_number",
331
+ "text": "5748",
332
+ "bbox": [
333
+ 480,
334
+ 928,
335
+ 521,
336
+ 940
337
+ ],
338
+ "page_idx": 1
339
+ },
340
+ {
341
+ "type": "text",
342
+ "text": "statistical analytics on i.i.d. data with LDP (Murakami and Kawamoto, 2019; Nie et al., 2019), it is much more challenging to achieve good utility-privacy trade-offs for LMs with LDP. To improve the model utility, existing methods (Feyisetan et al., 2020; Qu et al., 2021; Yue et al., 2021) rely on a relaxed notion of metric local differential privacy (Chatzikokolakis et al., 2013) (MLDP, also known as $d_{\\chi}$ -privacy) for text sanitization. However, they either achieve reasonable accuracy only at a very low privacy protection level (e.g., with a privacy parameter $\\epsilon > 10$ ) or become unusable (around $50\\%$ accuracy rate for the benchmark binary classification tasks) with appropriate privacy guarantees (e.g., $\\epsilon = 2$ ). Thus, there remains much room for improvement in terms of utility-privacy trade-off for differentially private text sanitization, which is the goal of this work.",
343
+ "bbox": [
344
+ 112,
345
+ 84,
346
+ 489,
347
+ 374
348
+ ],
349
+ "page_idx": 2
350
+ },
351
+ {
352
+ "type": "text",
353
+ "text": "3 Preliminaries",
354
+ "text_level": 1,
355
+ "bbox": [
356
+ 112,
357
+ 385,
358
+ 265,
359
+ 399
360
+ ],
361
+ "page_idx": 2
362
+ },
363
+ {
364
+ "type": "text",
365
+ "text": "Before introducing our CusText mechanism, we briefly review the key concepts, including $\\epsilon$ -DP and exponential mechanism (EM).",
366
+ "bbox": [
367
+ 112,
368
+ 410,
369
+ 487,
370
+ 458
371
+ ],
372
+ "page_idx": 2
373
+ },
374
+ {
375
+ "type": "text",
376
+ "text": "Definition 1 (ε-differential privacy (Dwork et al., 2006)). For a given privacy parameter $\\epsilon \\geq 0$ , all pairs of adjacent inputs $x, x' \\in \\mathcal{X}$ , and every possible output $y \\in \\mathcal{Y}$ , a randomized mechanism $\\mathcal{M}$ is $\\epsilon$ -differentially private (DP) if it holds that",
377
+ "bbox": [
378
+ 112,
379
+ 462,
380
+ 489,
381
+ 542
382
+ ],
383
+ "page_idx": 2
384
+ },
385
+ {
386
+ "type": "equation",
387
+ "text": "\n$$\n\\frac {\\Pr [ \\mathcal {M} (x) = y ]}{\\Pr [ \\mathcal {M} (x ^ {\\prime}) = y ]} \\leq e ^ {\\epsilon}. \\tag {1}\n$$\n",
388
+ "text_format": "latex",
389
+ "bbox": [
390
+ 216,
391
+ 551,
392
+ 487,
393
+ 586
394
+ ],
395
+ "page_idx": 2
396
+ },
397
+ {
398
+ "type": "text",
399
+ "text": "By definition, a smaller value of $\\epsilon$ corresponds to a higher level of privacy protection. Conceptually, the notion of $\\epsilon$ -DP means that an unlimited adversary cannot distinguish the two probabilistic ensembles with sufficiently small $\\epsilon$ because the probabilities of adjacent tokens producing the same output token $y$ are similar. In the context of NLP, we consider any pair of input tokens that share the same output set $\\mathcal{V}$ to be adjacent to each other. In the rest of this paper, we follow the above definition of adjacent inputs for $\\epsilon$ -DP. Next, we define the Exponential Mechanism (EM) commonly used for differentially private item selection from a discrete domain, which naturally fits NLP applications due to the discrete nature of textual data.",
400
+ "bbox": [
401
+ 112,
402
+ 593,
403
+ 489,
404
+ 834
405
+ ],
406
+ "page_idx": 2
407
+ },
408
+ {
409
+ "type": "text",
410
+ "text": "Definition 2 (Exponential Mechanism (McSherry and Talwar, 2007)). For a given scoring function $u: \\mathcal{X} \\times \\mathcal{Y} \\to \\mathbb{R}$ , an exponential mechanism (EM) $\\mathcal{M}(\\mathcal{X}, u, \\mathcal{Y})$ satisfies $\\epsilon$ -differential privacy if it samples an output token $y \\in \\mathcal{Y}$ to perturb the",
411
+ "bbox": [
412
+ 112,
413
+ 838,
414
+ 489,
415
+ 919
416
+ ],
417
+ "page_idx": 2
418
+ },
419
+ {
420
+ "type": "text",
421
+ "text": "input token $x \\in \\mathcal{X}$ with probability proportional to $e^{\\frac{\\epsilon \\cdot u(x, y)}{2\\Delta u}}$ , where $u(x, y)$ denotes the score of output token $y$ for input token $x$ . In addition, we use $\\Delta u := \\max_{y \\in \\mathcal{Y}} \\max_{x, x' \\in \\mathcal{X}} |u(x, y) - u(x', y)|$ to denote the sensitivity of $u$ for EM.",
422
+ "bbox": [
423
+ 507,
424
+ 84,
425
+ 884,
426
+ 168
427
+ ],
428
+ "page_idx": 2
429
+ },
430
+ {
431
+ "type": "text",
432
+ "text": "From Definition 2, we can see that smaller sensitivity makes it harder for adversaries to distinguish the original token from its adjacent tokens. In practice, for simplicity, we can normalize the scoring function $u$ to scale its sensitivity $\\Delta u$ to a specific real number (e.g., 1). As such, the sampling probability of each output token $y$ for input token $x$ is only related to $u(x,y)$ , as $\\epsilon$ and $\\Delta u$ are known beforehand, and a larger $u(x,y)$ indicates a higher sampling probability.",
433
+ "bbox": [
434
+ 507,
435
+ 170,
436
+ 884,
437
+ 331
438
+ ],
439
+ "page_idx": 2
440
+ },
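The exponential mechanism described in Definition 2 admits a short, self-contained sketch (hypothetical helper names, not from the paper; the sensitivity $\Delta u$ is assumed normalized to 1, as the text suggests):

```python
import math
import random

def em_probs(scores, eps, delta_u=1.0):
    # Pr[y] proportional to exp(eps * u(x, y) / (2 * delta_u))
    weights = [math.exp(eps * s / (2.0 * delta_u)) for s in scores]
    total = sum(weights)
    return [w / total for w in weights]

def em_sample(candidates, scores, eps, delta_u=1.0, rng=random):
    # Draw one output token from the EM distribution.
    return rng.choices(candidates, weights=em_probs(scores, eps, delta_u))[0]

# Sanity check of Definition 1: for adjacent inputs x, x' whose scores
# differ by at most delta_u on every candidate, the probability ratio
# for each output y is bounded by e^eps.
eps = 1.0
u_x  = [0.9, 0.4, 0.1]   # u(x, y) for three candidate outputs
u_x2 = [0.2, 0.8, 0.5]   # u(x', y); per-candidate gap <= delta_u = 1
p, q = em_probs(u_x, eps), em_probs(u_x2, eps)
assert all(pi / qi <= math.exp(eps) + 1e-12 for pi, qi in zip(p, q))
```

Larger scores get exponentially larger sampling weight, which is why normalizing the scoring function (next paragraph) directly controls the utility-privacy trade-off.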
441
+ {
442
+ "type": "text",
443
+ "text": "In an NLP task, we suppose that each document $D = \\langle R_i \\rangle_{i=1}^m$ contains $m$ records and each record $R = \\langle t_j \\rangle_{j=1}^n$ contains $n$ tokens. We formulate our text sanitization task as follows: Given an input document $D$ containing sensitive information, a set $\\mathcal{X}$ of all possible input tokens, a set $\\mathcal{Y}$ of all possible output tokens, and a differentially private mechanism $\\mathcal{M}$ (e.g., EM in this work), it performs the mechanism $\\mathcal{M}$ on each input token $t_j \\in D$ to replace it with an output token $t_j'$ from $\\mathcal{Y}$ if $t_j \\in \\mathcal{X}$ . All the tokens after replacement form the sanitized document, i.e., $D' = \\langle R_i' \\rangle_{i=1}^m$ and $R' = \\langle t_j' \\rangle_{j=1}^n$ .",
444
+ "bbox": [
445
+ 507,
446
+ 332,
447
+ 882,
448
+ 526
449
+ ],
450
+ "page_idx": 2
451
+ },
452
+ {
453
+ "type": "text",
454
+ "text": "Following the prior work on text sanitization (Qu et al., 2021; Feyisetan et al., 2020; Yue et al., 2021), we consider a semi-honest threat model under the LDP setting where data owners (e.g., individuals or institutions) only submit their sanitized documents to the service provider. Malicious service providers may try to infer sensitive information from their received data. We assume that adversaries only have access to sanitized texts, and all algorithms and mechanisms are publicly known. Moreover, adversaries have unlimited computation resources.",
455
+ "bbox": [
456
+ 507,
457
+ 527,
458
+ 882,
459
+ 702
460
+ ],
461
+ "page_idx": 2
462
+ },
463
+ {
464
+ "type": "text",
465
+ "text": "4 The CusText Mechanism",
466
+ "text_level": 1,
467
+ "bbox": [
468
+ 507,
469
+ 712,
470
+ 759,
471
+ 727
472
+ ],
473
+ "page_idx": 2
474
+ },
475
+ {
476
+ "type": "text",
477
+ "text": "An overview of our customized text (CusText) sanitization mechanism is presented in Figure 2. In general, it replaces each token in the original text document with a new token to achieve the privacy guarantee. It consists of two components: (1) a mapping function $f_{\\mathrm{map}}: \\mathcal{X} \\to \\{\\mathcal{Y}' \\subseteq \\mathcal{Y}\\}$ that determines the output set $\\mathcal{Y}_j'$ for each input token $x_j \\in \\mathcal{X}$ based on semantic relevance; (2) a sampling function $f_{\\mathrm{sample}}: \\mathcal{X}' \\to \\mathcal{Y}'$ based on the exponential mechanism to sample a new token from",
478
+ "bbox": [
479
+ 507,
480
+ 737,
481
+ 884,
482
+ 897
483
+ ],
484
+ "page_idx": 2
485
+ },
486
+ {
487
+ "type": "page_footnote",
488
+ "text": "$\\mathbf{\\Pi}^3\\mathrm{For}$ any $\\mathcal{V}'\\subseteq \\mathcal{V},\\mathcal{X}' = \\{x\\in \\mathcal{X}\\mid f_{\\mathrm{map}}(x) = \\mathcal{Y}'\\}$",
489
+ "bbox": [
490
+ 529,
491
+ 902,
492
+ 840,
493
+ 919
494
+ ],
495
+ "page_idx": 2
496
+ },
497
+ {
498
+ "type": "page_number",
499
+ "text": "5749",
500
+ "bbox": [
501
+ 480,
502
+ 927,
503
+ 519,
504
+ 940
505
+ ],
506
+ "page_idx": 2
507
+ },
508
+ {
509
+ "type": "image",
510
+ "img_path": "images/e04efdbc1d60965103b1a40cc1f890e5982f49da4a7fef8518fee4e5964dd825.jpg",
511
+ "image_caption": [
512
+ "Figure 2: An overview of the CusText method."
513
+ ],
514
+ "image_footnote": [],
515
+ "bbox": [
516
+ 114,
517
+ 80,
518
+ 487,
519
+ 218
520
+ ],
521
+ "page_idx": 3
522
+ },
523
+ {
524
+ "type": "image",
525
+ "img_path": "images/070d6b3bd75911cf6dc1f8c2ce644dc5aecdaf412027b6cf31f1462db9ca908a.jpg",
526
+ "image_caption": [
527
+ "Figure 3: A comparison of the mapping schemes of SANTEXT and CusText."
528
+ ],
529
+ "image_footnote": [],
530
+ "bbox": [
531
+ 114,
532
+ 260,
533
+ 485,
534
+ 372
535
+ ],
536
+ "page_idx": 3
537
+ },
538
+ {
539
+ "type": "text",
540
+ "text": "an output set to sanitize the input token. Specifically, our CusText mechanism first obtains the output set $\\mathcal{Y}_j^\\prime$ for each $t_j\\in D$ according to $f_{\\mathrm{map}}$ , i.e., $\\mathcal{Y}_j^\\prime = f_{\\mathrm{map}}(t_j)$ , then samples an output token $t_j^\\prime$ from $\\mathcal{Y}_j^\\prime$ according to $f_{\\mathrm{sample}}$ , i.e., $t_j^\\prime = f_{\\mathrm{sample}}(t_j)$ . Finally, after applying CusText on each input token $t_j$ in $D$ , the sanitized document $D^{\\prime}$ is formed by all output tokens.",
541
+ "bbox": [
542
+ 112,
543
+ 445,
544
+ 489,
545
+ 575
546
+ ],
547
+ "page_idx": 3
548
+ },
549
+ {
550
+ "type": "text",
551
+ "text": "4.1 Mapping Function",
552
+ "text_level": 1,
553
+ "bbox": [
554
+ 112,
555
+ 590,
556
+ 307,
557
+ 606
558
+ ],
559
+ "page_idx": 3
560
+ },
561
+ {
562
+ "type": "text",
563
+ "text": "In our CusText mechanism, the mapping function $f_{\\mathrm{map}}: \\mathcal{X} \\to \\{\\mathcal{Y}' \\subseteq \\mathcal{Y}\\}$ decides the output set for each input token. If a bunch of input tokens in $\\mathcal{X}$ are mapped to the same output set $\\mathcal{Y}'$ , we say that they belong to the same input set $\\mathcal{X}' \\subseteq \\mathcal{X}$ and are adjacent to each other. For the SANTEXT mechanism (Yue et al., 2021), the function $f_{\\mathrm{map}}: \\mathcal{X} \\to \\mathcal{Y}$ simply maps every input token $x \\in \\mathcal{X}$ to all tokens in the output set $\\mathcal{Y}$ . Since the size of the output set is excessively large in SANTEXT, the chances that the output token is semantically irrelevant to the original token become higher if the privacy budget is tight, thus leading to poor model utility. To overcome the above problem, CusText customizes the output set of each input token. A comparison of the mapping schemes of CusText and SANTEXT is shown in Figure 3. Before introducing how to construct $f_{\\mathrm{map}}$ , we first discuss the requirements for mapping generation.",
564
+ "bbox": [
565
+ 112,
566
+ 613,
567
+ 489,
568
+ 919
569
+ ],
570
+ "page_idx": 3
571
+ },
572
+ {
573
+ "type": "text",
574
+ "text": "Algorithm 1 Token Mapping Generation",
575
+ "text_level": 1,
576
+ "bbox": [
577
+ 510,
578
+ 84,
579
+ 815,
580
+ 99
581
+ ],
582
+ "page_idx": 3
583
+ },
584
+ {
585
+ "type": "text",
586
+ "text": "Input: Customization parameter $K$ , input set $\\mathcal{X}$ , output set $\\mathcal{Y} = \\mathcal{X}$ , similarity measure $d$",
587
+ "bbox": [
588
+ 509,
589
+ 102,
590
+ 880,
591
+ 127
592
+ ],
593
+ "page_idx": 3
594
+ },
595
+ {
596
+ "type": "text",
597
+ "text": "Output: Mapping Function $f_{\\mathrm{map}}$",
598
+ "bbox": [
599
+ 510,
600
+ 128,
601
+ 715,
602
+ 139
603
+ ],
604
+ "page_idx": 3
605
+ },
606
+ {
607
+ "type": "list",
608
+ "sub_type": "text",
609
+ "list_items": [
610
+ "1: while $|\\mathcal{X}|\\geq K$ do",
611
+ "2: Pick an arbitrary token $x$ from $\\mathcal{X}$",
612
+ "3: Initialize an output set $\\mathcal{Y}' = \\{x\\}$ for $x$",
613
+ "4: for all $y \\in \\mathcal{V} \\setminus \\{x\\}$ do",
614
+ "5: Compute the similarity $d(x,y)$ of $x$ and $y$",
615
+ "6: Add the top- $(K - 1)$ tokens that are semantically closest to $x$ to $\\mathcal{Y}^{\\prime}$ based on $d(\\cdot ,\\cdot)$",
616
+ "7: for all $x' \\in \\mathcal{Y}'$ do",
617
+ "8: Assign the output set of $x'$ as $y'$ .",
618
+ "9: Update $\\mathcal{X} \\gets \\mathcal{X} \\setminus \\mathcal{Y}'$ and $\\mathcal{Y} \\gets \\mathcal{Y} \\setminus \\mathcal{Y}'$",
619
+ "10: Perform Lines 2-9 for the remaining tokens in $\\mathcal{X}$ and $\\mathcal{Y}$ with customization parameter $K^{\\prime} = |\\mathcal{X}|$",
620
+ "11: return $f_{\\mathrm{map}}$"
621
+ ],
622
+ "bbox": [
623
+ 509,
624
+ 140,
625
+ 882,
626
+ 293
627
+ ],
628
+ "page_idx": 3
629
+ },
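Algorithm 1 can be sketched in a few lines (a hypothetical, simplified rendering, not the authors' code; `similarity(x, y)` is assumed to return larger values for semantically closer tokens):

```python
def generate_mapping(vocab, similarity, K):
    """Sketch of Algorithm 1: greedily partition the vocabulary into
    groups of K mutually close tokens; every token in a group shares
    that group as its output set (a K-to-K mapping)."""
    remaining = list(vocab)
    f_map = {}
    while len(remaining) >= K:
        x = remaining[0]                      # pick an arbitrary token
        # x plus its K-1 semantically closest unmapped tokens form a group
        neighbours = sorted(remaining[1:], key=lambda y: similarity(x, y),
                            reverse=True)[:K - 1]
        group = [x] + neighbours
        for token in group:
            f_map[token] = group              # shared customized output set
        remaining = [t for t in remaining if t not in group]
    if remaining:                             # K'-to-K' tail with K' < K
        for token in remaining:
            f_map[token] = list(remaining)
    return f_map
```

For example, with a toy vocabulary of integers and `similarity = lambda a, b: -abs(a - b)`, every token ends up inside its own output set, matching the requirement that $x$ is always closest to itself.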
630
+ {
631
+ "type": "text",
632
+ "text": "Mapping Strategy. According to the sizes of $\\mathcal{X}'$ and $\\mathcal{Y}'$ as indicated by the mapping function $f_{\\mathrm{map}}$ , we can categorize the token mappings into four types: 1-to-1, $N$ -to-1, 1-to- $N$ , and $N$ -to- $M$ , where $1$ , $N$ , and $M$ denote the size of the input/output token sets and $N$ , $M > 1$ . Theoretically, CusText can provide $\\epsilon$ -differential privacy protection to all input tokens only if the mappings of all input tokens in the set $\\mathcal{X}$ are $N$ -to- $M$ or $N$ -to-1 mappings so that every input token in $\\mathcal{X}$ has at least one adjacent token. This is because the goal of applying $\\epsilon$ -DP is to make any two adjacent tokens indistinguishable so that the input token cannot be effectively inferred. Moreover, following prior work (Feyisetan et al., 2020; Yue et al., 2021), we consider that $\\mathcal{X}$ is equal to $\\mathcal{Y}$ (i.e., $\\mathcal{X} = \\mathcal{Y}$ ) in CusText, as they both correspond to the vocabulary of a specific language. Also, any input token $x$ is always included in its output set because it must be the closest to itself. Next, we describe our mapping generation that can satisfy all the above requirements.",
633
+ "bbox": [
634
+ 507,
635
+ 329,
636
+ 884,
637
+ 668
638
+ ],
639
+ "page_idx": 3
640
+ },
641
+ {
642
+ "type": "text",
643
+ "text": "Mapping Function Generation. The generation of the mapping function $f_{\\mathrm{map}}: \\mathcal{X} \\to \\{\\mathcal{Y}' \\subseteq \\mathcal{Y}\\}$ is to assign the customized output set for each input token based on semantic relevance. The semantic relevance can be defined by any similarity measure $d: \\mathcal{X} \\times \\mathcal{Y} \\to \\mathbb{R}$ . In practice, we use the Euclidean distance or cosine similarity on the vector representations of tokens, such as Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and Counter-Fitting (Mrksic et al., 2016) as the similarity measure. Then, we fix the sizes of all output sets to $K$ . Specifically, we pick an arbitrary unmapped token $x \\in \\mathcal{X}$ , find the $K$ tokens semantically closest to $x$ , generate an $K$ -to- $K$ mapping from all the $K$ tokens to themselves, and remove the mapped",
644
+ "bbox": [
645
+ 507,
646
+ 677,
647
+ 882,
648
+ 919
649
+ ],
650
+ "page_idx": 3
651
+ },
652
+ {
653
+ "type": "page_number",
654
+ "text": "5750",
655
+ "bbox": [
656
+ 480,
657
+ 927,
658
+ 521,
659
+ 940
660
+ ],
661
+ "page_idx": 3
662
+ },
663
+ {
664
+ "type": "text",
665
+ "text": "tokens from $\\mathcal{X}$ and $\\mathcal{Y}$ at each round until either all tokens are mapped or fewer than $K$ tokens remain unmapped. In the latter case, the remaining tokens will constitute a $K^{\\prime}$ -to- $K^{\\prime}$ mapping where $K^{\\prime} \\in [1, K)$ . The pseudocode of generating the mapping function $f_{\\mathrm{map}}$ is presented in Algorithm 1.",
666
+ "bbox": [
667
+ 112,
668
+ 84,
669
+ 490,
670
+ 181
671
+ ],
672
+ "page_idx": 4
673
+ },
674
+ {
675
+ "type": "text",
676
+ "text": "4.2 Sampling Function",
677
+ "text_level": 1,
678
+ "bbox": [
679
+ 112,
680
+ 191,
681
+ 310,
682
+ 206
683
+ ],
684
+ "page_idx": 4
685
+ },
686
+ {
687
+ "type": "text",
688
+ "text": "Based on the mapping function $f_{\\mathrm{map}}: \\mathcal{X} \\to \\{\\mathcal{Y}' \\subseteq \\mathcal{Y}\\}$ , a sampling function $f_{\\mathrm{sample}}: \\mathcal{X}' \\to \\mathcal{Y}'$ is designed to sample the output token for each input token. CusText adopts the exponential mechanism (McSherry and Talwar, 2007) (EM) for sampling. We need to design an appropriate scoring function for EM to strike a good utility-privacy trade-off. We obey the following two rules when designing the scoring function $u: \\mathcal{X}' \\times \\mathcal{Y}' \\to \\mathbb{R}$ .",
689
+ "bbox": [
690
+ 112,
691
+ 211,
692
+ 489,
693
+ 356
694
+ ],
695
+ "page_idx": 4
696
+ },
697
+ {
698
+ "type": "list",
699
+ "sub_type": "text",
700
+ "list_items": [
701
+ "1. The score of each pair of input and output tokens should be bounded, i.e., $\\forall x\\in \\mathcal{X}^{\\prime}$ $\\forall y\\in \\mathcal{Y}^{\\prime},u(x,y) < B$ , so that the sensitivity $\\Delta u$ of $u$ is bounded for satisfying $\\epsilon$ -DP.",
702
+ "2. The higher the semantic similarity between a pair of input and output tokens is, the higher the score is, i.e., $\\forall x\\in \\mathcal{X}^{\\prime},\\forall y,y^{\\prime}\\in \\mathcal{Y}^{\\prime}$ , if $y$ is semantically closer to $x$ than $y^\\prime$ , $u(x,y) > u(x,y')$ . This ensures the candidates semantically closer to $x$ have higher probabilities of being sampled, which inherits the advantage of $d_{\\chi}$ -privacy (Chatzikokolakis et al., 2013)."
703
+ ],
704
+ "bbox": [
705
+ 127,
706
+ 365,
707
+ 489,
708
+ 569
709
+ ],
710
+ "page_idx": 4
711
+ },
712
+ {
713
+ "type": "text",
714
+ "text": "For the scoring function, we are based on the same similarity function as used in the mapping scheme, e.g., Euclidean distance or cosine similarity on the vector representations of tokens (Mikolov et al., 2013; Pennington et al., 2014; Mrksic et al., 2016). Generally, according to the correlation between scores and semantic closeness, all the similarity measures can be categorized into two types, i.e., negative correlation and positive correlation. For instance, Euclidean distance and cosine similarity are negative and positive correlation measures, respectively, as a smaller Euclidean distance and a larger cosine value between two vectors imply higher semantic closeness of their corresponding tokens. Next, we will design scoring functions for both types of similarity measures.",
715
+ "bbox": [
716
+ 112,
717
+ 577,
718
+ 489,
719
+ 834
720
+ ],
721
+ "page_idx": 4
722
+ },
723
+ {
724
+ "type": "text",
725
+ "text": "Scoring Function for Negative Correlation Measures. We take Euclidean distance as an example to design the scoring function $u: \\mathcal{X}' \\times \\mathcal{Y}' \\to \\mathbb{R}$ . For any input set $\\mathcal{X}'$ and its corresponding output set $\\mathcal{Y}'$ , we first compute the Euclidean distance $d(x, y)$",
726
+ "bbox": [
727
+ 112,
728
+ 839,
729
+ 489,
730
+ 920
731
+ ],
732
+ "page_idx": 4
733
+ },
734
+ {
735
+ "type": "code",
736
+ "sub_type": "algorithm",
737
+ "code_caption": [
738
+ "Algorithm 2 Document Sanitization"
739
+ ],
740
+ "code_body": "Input: Original document $D = \\langle R_i\\rangle_{i = 1}^m$ sampling function $f_{\\mathrm{sample}}$ ,stopword list $T$ \nOutput: Sanitized document $D^{\\prime}$ \n1: Initialize the sanitized document $D^{\\prime} = \\emptyset$ \n2: for all record $R\\in D$ do \n3: Initialize the sanitized record $R^{\\prime} = \\emptyset$ \n4: for all token $x\\in R$ do \n5: if CusText $^+$ is used and $x\\in T$ then \n6: Append $x$ to $R^{\\prime}$ \n7: else \n8: $x^{\\prime}\\gets f_{\\mathrm{sample}}(x)$ and append $x$ to $R^{\\prime}$ \n9: Add $R^{\\prime}$ to $D^{\\prime}$ \n10: return $D^{\\prime}$",
741
+ "bbox": [
742
+ 509,
743
+ 102,
744
+ 880,
745
+ 260
746
+ ],
747
+ "page_idx": 4
748
+ },
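A minimal sketch of Algorithm 2, assuming `f_sample` is the sampling function of Section 4.2 and treating CusText+ as the case of a non-empty stopword list (hypothetical names, not the authors' code):

```python
def sanitize_document(document, f_sample, stopwords=None):
    """Sketch of Algorithm 2: replace every token of every record with
    a sampled token; with CusText+ (a non-empty stopword list), tokens
    in the list are kept verbatim to preserve the original semantics."""
    stopwords = stopwords or set()
    return [[x if x in stopwords else f_sample(x) for x in record]
            for record in document]
```

For instance, with `stopwords={"the", "a"}` and a toy `f_sample` that upper-cases its input, `[["the", "cat"]]` becomes `[["the", "CAT"]]`: stopwords survive, every other token is perturbed.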
749
+ {
750
+ "type": "text",
751
+ "text": "between each $x \\in \\mathcal{X}'$ and $y \\in \\mathcal{Y}'$ . Specifically, we have $d(x,y) = \\| \\Phi(x) - \\Phi(y) \\|_2$ , where $\\Phi(x)$ and $\\Phi(y)$ are the vector representations of $x$ and $y$ , respectively. Then, we normalize the distances of all pairs of tokens to the range $[0,1]$ as $d'(x,y) = \\frac{d(x,y) - d_{min}}{d_{max} - d_{min}}$ , where $d_{min} = \\min_{x \\in \\mathcal{X}', y \\in \\mathcal{Y}'} d(x,y)$ and $d_{max} = \\max_{x \\in \\mathcal{X}', y \\in \\mathcal{Y}'} d(x,y)$ . Finally, we transform the normalized distance $d'(x,y)$ into the score of output token $y$ for input token $x$ as $u(x,y) = -d'(x,y)$ . After the above transformation, a more similar pair $x,y$ of input and output tokens has a higher score $u(x,y)$ . Finally, by repeating the above steps on all disjoint partitions of adjacent tokens with the same $\\mathcal{X}'$ and $\\mathcal{Y}'$ , we have obtained the scoring functions for all tokens.",
752
+ "bbox": [
753
+ 507,
754
+ 288,
755
+ 884,
756
+ 532
757
+ ],
758
+ "page_idx": 4
759
+ },
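The distance-to-score transformation just described can be sketched as follows (hypothetical names; `dist` is assumed to map each $x \in \mathcal{X}'$ to its Euclidean distances to every $y \in \mathcal{Y}'$):

```python
def scores_from_distances(dist):
    """Scoring function for a negative-correlation measure (Euclidean
    distance): normalize all pairwise distances in the partition to
    [0, 1] and negate them, so closer pairs receive higher scores and
    the sensitivity of u is bounded by 1."""
    values = [d for row in dist.values() for d in row.values()]
    d_min, d_max = min(values), max(values)
    span = (d_max - d_min) or 1.0   # guard against a degenerate partition
    return {x: {y: -(d - d_min) / span for y, d in row.items()}
            for x, row in dist.items()}
```

The closest pair in the partition thus gets score 0 and the farthest gets -1, so scores lie in [-1, 0] and the sensitivity bound needed for $\epsilon$-DP holds.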
760
+ {
761
+ "type": "text",
762
+ "text": "Scoring Function for Positive Correlation Measures. We take cosine similarity as another example to design the scoring function $u$ . For any input set $\\mathcal{X}'$ and its corresponding output set $\\mathcal{Y}'$ , we also compute the cosine similarity $\\cos(x, y)$ between each $x \\in \\mathcal{X}'$ and $y \\in \\mathcal{Y}'$ , where $\\cos(x, y) = \\frac{\\langle \\Phi(x), \\Phi(y) \\rangle}{\\|\\Phi(x)\\| \\cdot \\|\\Phi(y)\\|}$ and $\\Phi(x)$ and $\\Phi(y)$ are the vector representations of $x$ and $y$ . Then, the normalization procedure is the same as that for Euclidean distance, but we use the normalized distance, instead of its additive inverse, in the score function, i.e., $u(x, y) = \\frac{d(x, y) - d_{\\min}}{d_{\\max} - d_{\\min}}$ . Finally, we repeat the above steps on all disjoint partitions of adjacent tokens to obtain all scoring functions.",
763
+ "bbox": [
764
+ 507,
765
+ 539,
766
+ 884,
767
+ 766
768
+ ],
769
+ "page_idx": 4
770
+ },
771
+ {
772
+ "type": "text",
773
+ "text": "Sampling Procedure. After acquiring the scoring function $u$ for each input token $x$ , the sampling function $f_{\\mathrm{sample}}$ is used to generate the sanitized token $x'$ for $x$ based on the exponential mechanism $\\mathcal{M}(\\{x\\}, u, \\mathcal{Y}')$ with a privacy parameter $\\epsilon > 0$ . The pseudocode of sanitizing a document based on $f_{\\mathrm{sample}}$ is provided in Algorithm 2. Theoretically, it guarantees that $f_{\\mathrm{sample}}$ satisfies $\\epsilon$ -DP. For any input set $\\mathcal{X}'$ and its corresponding output set $\\mathcal{Y}'$ ,",
774
+ "bbox": [
775
+ 507,
776
+ 774,
777
+ 885,
778
+ 919
779
+ ],
780
+ "page_idx": 4
781
+ },
782
+ {
783
+ "type": "page_number",
784
+ "text": "5751",
785
+ "bbox": [
786
+ 480,
787
+ 927,
788
+ 517,
789
+ 940
790
+ ],
791
+ "page_idx": 4
792
+ },
793
+ {
794
+ "type": "text",
795
+ "text": "the sensitivity $\\Delta u$ between any two adjacent input tokens $x, x' \\in \\mathcal{X}'$ is bound by 1 according to the design of the scoring function $u$ , i.e.,",
796
+ "bbox": [
797
+ 112,
798
+ 84,
799
+ 487,
800
+ 134
801
+ ],
802
+ "page_idx": 5
803
+ },
804
+ {
805
+ "type": "equation",
806
+ "text": "\n$$\n\\Delta u = \\max _ {y \\in \\mathcal {Y} ^ {\\prime}} \\max _ {x, x ^ {\\prime} \\in \\mathcal {X} ^ {\\prime}} | u (x, y) - u (x ^ {\\prime}, y) | = 1\n$$\n",
807
+ "text_format": "latex",
808
+ "bbox": [
809
+ 137,
810
+ 142,
811
+ 463,
812
+ 168
813
+ ],
814
+ "page_idx": 5
815
+ },
816
+ {
817
+ "type": "text",
818
+ "text": "Given a privacy parameter $\\epsilon > 0$ , the probability of obtaining an output token $y \\in \\mathcal{Y}'$ for an input token $x \\in \\mathcal{X}'$ is as follows:",
819
+ "bbox": [
820
+ 112,
821
+ 175,
822
+ 487,
823
+ 224
824
+ ],
825
+ "page_idx": 5
826
+ },
827
+ {
828
+ "type": "equation",
829
+ "text": "\n$$\n\\operatorname * {P r} [ f _ {\\mathrm {s a m p l e}} (x) = y ] = \\frac {\\exp \\left(\\frac {\\epsilon u (x , y)}{2 \\Delta u}\\right)}{\\sum_ {y ^ {\\prime} \\in \\mathcal {Y} ^ {\\prime}} \\exp \\left(\\frac {\\epsilon u (x , y ^ {\\prime})}{2 \\Delta u}\\right)}\n$$\n",
830
+ "text_format": "latex",
831
+ "bbox": [
832
+ 134,
833
+ 231,
834
+ 463,
835
+ 275
836
+ ],
837
+ "page_idx": 5
838
+ },
839
+ {
840
+ "type": "text",
841
+ "text": "We can prove that the sampling function $f_{\\mathrm{sample}}$ satisfies $\\epsilon$ -DP because, for any two input tokens $x, x' \\in \\mathcal{X}'$ and output token $y \\in \\mathcal{Y}'$ , it holds that",
842
+ "bbox": [
843
+ 112,
844
+ 282,
845
+ 487,
846
+ 332
847
+ ],
848
+ "page_idx": 5
849
+ },
850
+ {
851
+ "type": "equation",
852
+ "text": "\n$$\n\\begin{array}{l} \\frac {\\operatorname * {P r} [ f _ {\\text {s a m p l e}} (x) = y ]}{\\operatorname * {P r} [ f _ {\\text {s a m p l e}} (x ^ {\\prime}) = y ]} = \\frac {\\frac {\\exp (\\frac {\\epsilon u (x , y)}{2 \\Delta u})}{\\sum_ {y ^ {\\prime} \\in \\mathcal {Y} ^ {\\prime}} \\exp (\\frac {\\epsilon u (x , y ^ {\\prime})}{2 \\Delta u})}}{\\frac {\\exp (\\frac {\\epsilon u (x ^ {\\prime} , y)}{2 \\Delta u})}{\\sum_ {y ^ {\\prime} \\in \\mathcal {Y} ^ {\\prime}} \\exp (\\frac {\\epsilon u (x ^ {\\prime} , y ^ {\\prime})}{2 \\Delta u})}} \\\\ = e ^ {\\frac {\\epsilon \\cdot (u (x , y) - u (x ^ {\\prime} , y))}{2 \\Delta u}} \\cdot \\Big (\\frac {\\sum_ {y ^ {\\prime} \\in \\mathcal {Y} ^ {\\prime}} \\exp \\big (\\frac {\\epsilon u (x ^ {\\prime} , y ^ {\\prime})}{2 \\Delta u} \\big)}{\\sum_ {y ^ {\\prime} \\in \\mathcal {Y} ^ {\\prime}} \\exp \\big (\\frac {\\epsilon u (x , y ^ {\\prime})}{2 \\Delta u} \\big)} \\Big) \\\\ \\leq e ^ {\\frac {\\epsilon}{2}} \\cdot e ^ {\\frac {\\epsilon}{2}} \\cdot \\left(\\frac {\\sum_ {y ^ {\\prime} \\in \\mathcal {Y} ^ {\\prime}} \\exp (\\frac {\\epsilon u (x , y ^ {\\prime})}{2 \\Delta u})}{\\sum_ {y ^ {\\prime} \\in \\mathcal {Y} ^ {\\prime}} \\exp (\\frac {\\epsilon u (x , y ^ {\\prime})}{2 \\Delta u})}\\right) = e ^ {\\epsilon}. \\\\ \\end{array}\n$$\n",
853
+ "text_format": "latex",
854
+ "bbox": [
855
+ 129,
856
+ 338,
857
+ 470,
858
+ 502
859
+ ],
860
+ "page_idx": 5
861
+ },
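Combining the mapping function of Section 4.1 with the scoring function and the sampling probability displayed above, the whole per-token sampling procedure can be sketched as (hypothetical helper names, not the authors' code):

```python
import math
import random

def make_f_sample(f_map, u, eps, delta_u=1.0, rng=random):
    """Build f_sample: for input token x, sample y from its customized
    output set f_map[x] with Pr[y] proportional to
    exp(eps * u(x, y) / (2 * delta_u))."""
    def f_sample(x):
        candidates = f_map[x]
        weights = [math.exp(eps * u(x, y) / (2.0 * delta_u))
                   for y in candidates]
        return rng.choices(candidates, weights=weights)[0]
    return f_sample
```

With a tight privacy budget (small `eps`) the weights flatten toward uniform over the output set, which is why restricting that set to $K$ semantically close tokens preserves utility.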
862
+ {
863
+ "type": "text",
864
+ "text": "4.3 The CusText+ Mechanism",
865
+ "text_level": 1,
866
+ "bbox": [
867
+ 112,
868
+ 510,
869
+ 366,
870
+ 524
871
+ ],
872
+ "page_idx": 5
873
+ },
874
+ {
875
+ "type": "text",
876
+ "text": "Since not all tokens contain sensitive information, our CusText mechanism that replaces all tokens might be over-protective. Therefore, we can retain non-sensitive original tokens with low privacy risk (e.g., stopwords) to improve the utility of the sanitized text. In practice, we have a predefined list of stopwords $T$ (e.g., the collection of stopwords in the NLTK library), check whether each token $x$ is included in $T$ , and keep $x$ in the sanitized document if $x \\in T$ or replace $x$ with $x' = f_{\\text{sample}}(x)$ otherwise. The above procedure is called the CusText+ mechanism and is also described in Algorithm 2.",
877
+ "bbox": [
878
+ 112,
879
+ 531,
880
+ 489,
881
+ 725
882
+ ],
883
+ "page_idx": 5
884
+ },
885
+ {
886
+ "type": "text",
887
+ "text": "5 Experiments",
888
+ "text_level": 1,
889
+ "bbox": [
890
+ 112,
891
+ 734,
892
+ 260,
893
+ 752
894
+ ],
895
+ "page_idx": 5
896
+ },
897
+ {
898
+ "type": "text",
899
+ "text": "5.1 Experimental Setup",
900
+ "text_level": 1,
901
+ "bbox": [
902
+ 112,
903
+ 760,
904
+ 317,
905
+ 776
906
+ ],
907
+ "page_idx": 5
908
+ },
909
+ {
910
+ "type": "text",
911
+ "text": "Following (Feyisetan et al., 2020; Yue et al., 2021), we choose two datasets from the GLUE benchmark (Wang et al., 2019) and one medical dataset MedSTS (Wang et al., 2020), which all contain sensitive information, in our experiments. Detailed descriptions of the three datasets are as follows:",
912
+ "bbox": [
913
+ 112,
914
+ 781,
915
+ 487,
916
+ 878
917
+ ],
918
+ "page_idx": 5
919
+ },
920
+ {
921
+ "type": "text",
922
+ "text": "- SST-2 is a popular movie reviews dataset with 67k training samples and 1.8k test samples",
923
+ "bbox": [
924
+ 134,
925
+ 887,
926
+ 487,
927
+ 919
928
+ ],
929
+ "page_idx": 5
930
+ },
931
+ {
932
+ "type": "text",
933
+ "text": "for sentiment classification, where accuracy is used as the evaluation metric.",
934
+ "bbox": [
935
+ 544,
936
+ 84,
937
+ 880,
938
+ 115
939
+ ],
940
+ "page_idx": 5
941
+ },
942
+ {
943
+ "type": "list",
944
+ "sub_type": "text",
945
+ "list_items": [
946
+ "- MedSTS is a medical dataset with 1,642 training samples and 412 test samples for semantic similarity computation, where Pearson correlation coefficient is used for evaluation.",
947
+ "- QNLI is a sentence dataset with 105k training samples and 5.2k test samples for sentence-pair classification, where accuracy is used as the evaluation metric."
948
+ ],
949
+ "bbox": [
950
+ 531,
951
+ 126,
952
+ 882,
953
+ 263
954
+ ],
955
+ "page_idx": 5
956
+ },
957
+ {
958
+ "type": "text",
959
+ "text": "In the experiments, we compare CusText with two existing text sanitization mechanisms, i.e., FBDD (Feyisetan et al., 2020) and SANTEXT (Yue et al., 2021). In the training phase, we perform each mechanism to sanitize the training data and then use the sanitized documents to fine-tune the pre-trained model. In the evaluation phase, we sanitize the test data by the same mechanism as used for training. When producing the sanitized documents, both the input set $\\mathcal{X}$ and output set $\\mathcal{Y}$ are assigned to the vocabulary of Counter-Fitting (Mrksic et al., 2016) (of size 65,713), and out-of-vocabulary (OOV) tokens except numbers are retained. For a fair comparison, we adopt the same vocabulary in GloVe (Pennington et al., 2014) as in Counter-Fitting. The Euclidean distance and cosine similarity are used as the similarity measures for GloVe and Counter-Fitting, respectively. We use the stopwords list in NLTK for CusText+. For each downstream task, we set the maximum sequence length to 128 and the training epoch to 3. On the SST2 and QNLI datasets, we set the batch size to 64 and the learning rate to $2 \\times 10^{-5}$ using bert-base-uncased $^4$ as the pre-trained model. On the MedSTS dataset, we set the batch size to 8 and the learning rate to $5 \\times 10^{-5}$ using ClinicalBERT (Alsentzer et al., 2019) as the pre-trained model. Other hyperparameters are the same as those used in the default Transformer model (Wolf et al., 2020). All experiments were conducted on a server with two Intel Xeon Silver 4210R 2.40GHz CPUs and one NVIDIA Tesla V100 SXM2 (32GB).",
960
+ "bbox": [
961
+ 507,
962
+ 274,
963
+ 884,
964
+ 790
965
+ ],
966
+ "page_idx": 5
967
+ },
968
+ {
969
+ "type": "text",
970
+ "text": "5.2 Experimental Results",
971
+ "text_level": 1,
972
+ "bbox": [
973
+ 507,
974
+ 800,
975
+ 726,
976
+ 815
977
+ ],
978
+ "page_idx": 5
979
+ },
980
+ {
981
+ "type": "text",
982
+ "text": "Comparison of Different Mechanisms for Text Sanitization. In this experiment, we fix the customization parameter $K$ to 20 in CusText and CusText+ and vary the privacy parameter $\\epsilon = 1,2,3$",
983
+ "bbox": [
984
+ 507,
985
+ 820,
986
+ 882,
987
+ 885
988
+ ],
989
+ "page_idx": 5
990
+ },
991
+ {
992
+ "type": "page_footnote",
993
+ "text": "4https://huggingface.co/ bert-base-uncased",
994
+ "bbox": [
995
+ 509,
996
+ 892,
997
+ 749,
998
+ 917
999
+ ],
1000
+ "page_idx": 5
1001
+ },
1002
+ {
1003
+ "type": "page_number",
1004
+ "text": "5752",
1005
+ "bbox": [
1006
+ 480,
1007
+ 927,
1008
+ 519,
1009
+ 940
1010
+ ],
1011
+ "page_idx": 5
1012
+ },
1013
+ {
1014
+ "type": "table",
1015
+ "img_path": "images/25dea88dcd160eecfc9bbb55b43aaeeef324b5d8f4bf11f397e6c9f8e51f37bc.jpg",
1016
+ "table_caption": [],
1017
+ "table_footnote": [],
1018
+ "table_body": "<table><tr><td rowspan=\"2\">Mechanisms</td><td colspan=\"3\">SST2</td><td colspan=\"3\">MedSTS</td><td colspan=\"3\">QNLI</td></tr><tr><td>ε = 1</td><td>ε = 2</td><td>ε = 3</td><td>ε = 1</td><td>ε = 2</td><td>ε = 3</td><td>ε = 1</td><td>ε = 2</td><td>ε = 3</td></tr><tr><td>Random</td><td colspan=\"3\">0.5014</td><td colspan=\"3\">0.0382</td><td colspan=\"3\">0.5037</td></tr><tr><td>FBDD</td><td>0.5022</td><td>0.5041</td><td>0.5032</td><td>0.0321</td><td>0.0368</td><td>0.0411</td><td>0.5021</td><td>0.5152</td><td>0.5368</td></tr><tr><td>SANTEXT</td><td>0.5014</td><td>0.4827</td><td>0.5091</td><td>0.0850</td><td>0.1673</td><td>0.1124</td><td>0.5304</td><td>0.5302</td><td>0.5357</td></tr><tr><td>CusText</td><td>0.6985</td><td>0.7172</td><td>0.7029</td><td>0.4957</td><td>0.5112</td><td>0.5242</td><td>0.6926</td><td>0.6884</td><td>0.7133</td></tr><tr><td>SANTEXT+</td><td>0.7211</td><td>0.7446</td><td>0.7260</td><td>0.4143</td><td>0.4271</td><td>0.5423</td><td>0.7607</td><td>0.7636</td><td>0.7493</td></tr><tr><td>CusText+</td><td>0.7501</td><td>0.7452</td><td>0.7683</td><td>0.6172</td><td>0.6316</td><td>0.6213</td><td>0.7528</td><td>0.7602</td><td>0.7740</td></tr><tr><td>Original</td><td colspan=\"3\">0.9050</td><td colspan=\"3\">0.7598</td><td colspan=\"3\">0.9096</td></tr></table>",
1019
+ "bbox": [
1020
+ 169,
1021
+ 80,
1022
+ 828,
1023
+ 193
1024
+ ],
1025
+ "page_idx": 6
1026
+ },
1027
+ {
1028
+ "type": "text",
1029
+ "text": "Table 1: Utility comparison of different sanitization mechanisms at similar privacy levels.",
1030
+ "bbox": [
1031
+ 194,
1032
+ 202,
1033
+ 800,
1034
+ 217
1035
+ ],
1036
+ "page_idx": 6
1037
+ },
1038
+ {
1039
+ "type": "text",
1040
+ "text": "for DP. The evaluation of the effect of $K$ on the performance of CusText will be presented later. Furthermore, we choose GloVe as the token embedding in CusText and CusText+ for a fair comparison since FBDD, SANTEXT, and SANTEXT+ cannot apply the Counter-Fitting embedding. This is because they only work with metric distances (e.g., Euclidean distance in GloVe) due to the inherent limitation of MLDP and thus cannot handle the non-metric cosine similarity in Counter-Fitting. Finally, because a mechanism will be $\\epsilon$ -DP if it is $\\epsilon'$ -MLDP (Chatzikokolakis et al., 2013), where $\\epsilon = \\epsilon' \\cdot d_{max}$ and $d_{max} = \\max_{x \\in \\mathcal{X}, y \\in \\mathcal{Y}} d(x, y)$ , we re-scale the privacy parameter $\\epsilon$ in FBDD, SANTEXT, and SANTEXT+ with $d_{max}$ to align their privacy levels to be similar to our mechanisms.",
1041
+ "bbox": [
1042
+ 112,
1043
+ 242,
1044
+ 489,
1045
+ 500
1046
+ ],
1047
+ "page_idx": 6
1048
+ },
1049
+ {
1050
+ "type": "text",
1051
+ "text": "Table 1 presents the utilities of different text sanitization mechanisms with $\\epsilon$ -DP ( $\\epsilon = 1,2,3$ ) on three datasets. The results demonstrate the huge advantages of CusText compared with two existing mechanisms, i.e., FBDD and SANTEXT, which achieves over $20\\%$ improvements in accuracy on the SST-2 and QNLI datasets and more than $50\\%$ improvement in Pearson correlation coefficient on the MedSTS dataset. Compared with SANTEXT and CusText, their improved versions, i.e., SANTEXT+ and CusText+, exhibit significantly better performance because they keep some original tokens to preserve original semantics. Generally, the results indicate the superior performance of CusText by showing that using a customized, smaller output set for each input token can lead to better utilities at similar (theoretical) privacy levels.",
1052
+ "bbox": [
1053
+ 112,
1054
+ 505,
1055
+ 489,
1056
+ 778
1057
+ ],
1058
+ "page_idx": 6
1059
+ },
1060
+ {
1061
+ "type": "text",
1062
+ "text": "Privacy-Utility Trade-off. Subsequently, we compare SANTEXT and CusText in terms of privacy-utility trade-offs. As shown in (Yue et al., 2021) as well as our previous results, FBDD has lower performance than SANTEXT and CusText and thus is not compared in the remaining experiments anymore. To alleviate the effects of different DP definitions in SANTEXT and CusText, we do not use",
1063
+ "bbox": [
1064
+ 112,
1065
+ 790,
1066
+ 489,
1067
+ 917
1068
+ ],
1069
+ "page_idx": 6
1070
+ },
1071
+ {
1072
+ "type": "image",
1073
+ "img_path": "images/d451f46a1f83aa5094c0a7a43499440083426c733ecbd8892df63958e3734caa.jpg",
1074
+ "image_caption": [
1075
+ "Figure 4: Privacy-utility trade-offs in terms of success rates of mask token inference attacks vs. accuracy rates by varying the privacy parameter $\\epsilon \\in [0.01, 50]$ on the SST-2 dataset. Here, \"Original\" denotes the result on unsanitized data."
1076
+ ],
1077
+ "image_footnote": [],
1078
+ "bbox": [
1079
+ 539,
1080
+ 250,
1081
+ 833,
1082
+ 416
1083
+ ],
1084
+ "page_idx": 6
1085
+ },
1086
+ {
1087
+ "type": "text",
1088
+ "text": "the privacy parameter $\\epsilon$ , which corresponds to the worst possible privacy leakage but may not reveal the privacy protection level in practice. Alternatively, we adopt two privacy attacks to evaluate the privacy protection levels: One is the Mask Token Inference Attack in (Yue et al., 2021), and the other is Query Attack proposed in this work.",
1089
+ "bbox": [
1090
+ 505,
1091
+ 527,
1092
+ 882,
1093
+ 640
1094
+ ],
1095
+ "page_idx": 6
1096
+ },
1097
+ {
1098
+ "type": "text",
1099
+ "text": "We first present the results for mask token inference attacks. To recover raw texts from sanitized texts, an adversary can use the pre-trained BERT model to help infer the original tokens since it is trained via masked language modeling. It replaces each token with a special token \"[MASK]\" in the sanitized text sequentially, inputs the masked text to BERT, and acquires the predicted output of \"[MASK]\" as the original token. Then, we consider the attack successful if the output token is the same as the input. Finally, we compute the success rate among all attacks, denoted as $r_{mask}$ , and define the privacy protection level as $1 - r_{mask}$ .",
1100
+ "bbox": [
1101
+ 505,
1102
+ 643,
1103
+ 884,
1104
+ 852
1105
+ ],
1106
+ "page_idx": 6
1107
+ },
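The mask token inference attack described above can be sketched generically. Here `predict_fn` is a caller-supplied stand-in for BERT's masked-LM prediction (in the paper it is the pre-trained BERT model); the toy predictor below is purely illustrative.

```python
def mask_attack_success_rate(tokens, predict_fn, mask="[MASK]"):
    # Mask each position of the sanitized text in turn, let the
    # predictor guess the hidden token, and count exact matches.
    hits = 0
    for i, token in enumerate(tokens):
        masked = tokens[:i] + [mask] + tokens[i + 1:]
        if predict_fn(masked, i) == token:
            hits += 1
    return hits / len(tokens)

# Toy predictor that always guesses "the" (stand-in for BERT).
guess_the = lambda masked, i: "the"
r_mask = mask_attack_success_rate(["the", "movie", "was", "the", "best"], guess_the)
print(r_mask, 1 - r_mask)  # success rate 0.4, privacy level 0.6
```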
1108
+ {
1109
+ "type": "text",
1110
+ "text": "Figure 4 illustrates the privacy-utility trade-offs of CusText (based on GloVe and Counter-Fitting, respectively) and SANTEXT (based on GloVe) by varying the value of $\\epsilon$ on the SST-2 dataset. The",
1111
+ "bbox": [
1112
+ 507,
1113
+ 854,
1114
+ 882,
1115
+ 917
1116
+ ],
1117
+ "page_idx": 6
1118
+ },
1119
+ {
1120
+ "type": "page_number",
1121
+ "text": "5753",
1122
+ "bbox": [
1123
+ 480,
1124
+ 927,
1125
+ 519,
1126
+ 940
1127
+ ],
1128
+ "page_idx": 6
1129
+ },
1130
+ {
1131
+ "type": "table",
1132
+ "img_path": "images/eab92e11a699bdac2eb51b48d109e5dea4c9a29100ddb5a3ee26074dcb07c742.jpg",
1133
+ "table_caption": [],
1134
+ "table_footnote": [],
1135
+ "table_body": "<table><tr><td rowspan=\"2\">Token</td><td colspan=\"3\">SANTEXT</td><td colspan=\"4\">CusText (GloVe)</td><td colspan=\"4\">CusText (Counter-Fitting)</td></tr><tr><td>ε&#x27; = 1</td><td>ε&#x27; = 2</td><td>ε&#x27; = 3</td><td>ε = 1</td><td>ε = 2</td><td>ε = 3</td><td>ε = 8</td><td>ε = 1</td><td>ε = 2</td><td>ε = 3</td><td>ε = 8</td></tr><tr><td>she</td><td>2350</td><td>35</td><td>4</td><td>1000</td><td>200</td><td>80</td><td>5</td><td>5500</td><td>1000</td><td>320</td><td>4</td></tr><tr><td>car</td><td>1300</td><td>14</td><td>1</td><td>1220</td><td>250</td><td>90</td><td>6</td><td>420000</td><td>90000</td><td>31000</td><td>3200</td></tr><tr><td>alice</td><td>1550</td><td>20</td><td>3</td><td>1190</td><td>240</td><td>100</td><td>6</td><td>1700</td><td>360</td><td>120</td><td>9</td></tr><tr><td>happy</td><td>3200</td><td>55</td><td>4</td><td>1490</td><td>290</td><td>110</td><td>8</td><td>320000</td><td>55000</td><td>21500</td><td>1500</td></tr><tr><td>Accuracy</td><td>0.4959</td><td>0.5799</td><td>0.7958</td><td>0.6985</td><td>0.7172</td><td>0.7029</td><td>0.8155</td><td>0.7117</td><td>0.7370</td><td>0.7298</td><td>0.7957</td></tr></table>",
1136
+ "bbox": [
1137
+ 115,
1138
+ 80,
1139
+ 885,
1140
+ 170
1141
+ ],
1142
+ "page_idx": 7
1143
+ },
1144
+ {
1145
+ "type": "text",
1146
+ "text": "Table 2: Results for query attacks on four selected tokens in the SST-2 dataset.",
1147
+ "bbox": [
1148
+ 231,
1149
+ 178,
1150
+ 764,
1151
+ 193
1152
+ ],
1153
+ "page_idx": 7
1154
+ },
1155
+ {
1156
+ "type": "text",
1157
+ "text": "results confirm that CusText achieves better utility-privacy trade-offs than SANTEXT and remains a relatively good utility (accuracy at around 0.7) when the privacy level approaches 1 (over 0.98). In comparison, SANTEXT degenerates to a random classifier (accuracy at around 0.5). Meanwhile, the results also imply that Counter-Fitting works better with CusText than GloVe. The higher performance of Counter-Fitting can be attributed to its better representations of synonyms.",
1158
+ "bbox": [
1159
+ 112,
1160
+ 219,
1161
+ 490,
1162
+ 381
1163
+ ],
1164
+ "page_idx": 7
1165
+ },
1166
+ {
1167
+ "type": "text",
1168
+ "text": "We then describe the results for query attacks. Since the input token is contained in its corresponding output set and always has the highest score, the probability that it is sampled by $f_{\\text{sample}}$ is also the highest among all output tokens. An adversary can determine the input token by querying the data owner for the sanitized document multiple times, as the input token will have the highest frequency among all output tokens after a sufficiently large number of queries. Thus, we use the smallest number $N$ of queries an adversary needs to infer the input token at a confidence level of $95\\%$ as a new measure of the privacy protection level. Here, the larger the value of $N$ is, the higher the level of privacy protection is. In the experiment, we obtain the value of $N$ by using the Monte Carlo method (Gentle, 2009) to sample the output tokens until the confidence level of determining the input token from the output distribution reaches $95\\%$ .",
1169
+ "bbox": [
1170
+ 115,
1171
+ 384,
1172
+ 489,
1173
+ 690
1174
+ ],
1175
+ "page_idx": 7
1176
+ },
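The query-attack measure $N$ can be estimated with a small Monte Carlo sketch along these lines; the doubling search over $N$, the trial count, and the toy output distributions are our assumptions for illustration, not the paper's exact procedure.

```python
import random
from collections import Counter

def min_queries(probs, true_idx, conf=0.95, trials=400, max_n=4096, seed=0):
    # Smallest number N of repeated queries (searched over powers
    # of 2) for which the most frequent sampled token equals the
    # input token in at least `conf` of the Monte Carlo trials.
    rng = random.Random(seed)
    support = range(len(probs))
    n = 1
    while n <= max_n:
        wins = sum(
            Counter(rng.choices(support, weights=probs, k=n)).most_common(1)[0][0] == true_idx
            for _ in range(trials)
        )
        if wins / trials >= conf:
            return n
        n *= 2
    return None  # not identifiable within max_n queries

# A peaked output distribution leaks the input almost immediately...
print(min_queries([0.99, 0.005, 0.005], 0))  # 1
# ...while a flatter one requires many more queries.
print(min_queries([0.4, 0.3, 0.3], 0))
```

A flatter sampling distribution (i.e., a smaller privacy parameter) forces the adversary to issue many more queries, which matches the trend reported in Table 2.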
1177
+ {
1178
+ "type": "text",
1179
+ "text": "Table 2 further confirms that CusText achieves better privacy-utility trade-offs than SANTEXT. Although SANTEXT achieves a good utility when $\\epsilon' = 3$ (i.e., with 3-MLDP), it almost provides no privacy protection as input tokens can be inferred by performing only a few queries. CusText (with either GloVe or Counter-Fitting) remains relatively good privacy protection levels when $\\epsilon = 3$ (i.e., with 3-DP) while achieving high utilities. Generally, Counter-Fitting also outperforms GloVe for CusText. But the privacy protections for different tokens vary very much for Counter-Fitting: \"she\" and \"alice\" are more vulnerable than \"car\" and \"happy\". This is because \"she\" and \"alice\" are",
1180
+ "bbox": [
1181
+ 112,
1182
+ 694,
1183
+ 490,
1184
+ 919
1185
+ ],
1186
+ "page_idx": 7
1187
+ },
1188
+ {
1189
+ "type": "image",
1190
+ "img_path": "images/67b6ea87cd41ddef2caa56b944af681671aa74c9dd0588c05ef04fc32773d09e.jpg",
1191
+ "image_caption": [
1192
+ "Figure 5: Privacy-utility trade-offs of CusText with different customization parameters $K$ by varying the privacy parameter $\\epsilon \\in [0.001, 50]$ on the SST-2 dataset."
1193
+ ],
1194
+ "image_footnote": [],
1195
+ "bbox": [
1196
+ 536,
1197
+ 225,
1198
+ 838,
1199
+ 394
1200
+ ],
1201
+ "page_idx": 7
1202
+ },
1203
+ {
1204
+ "type": "text",
1205
+ "text": "mapped with semantically less relevant tokens than themselves in the mapping function generation.",
1206
+ "bbox": [
1207
+ 507,
1208
+ 473,
1209
+ 882,
1210
+ 505
1211
+ ],
1212
+ "page_idx": 7
1213
+ },
1214
+ {
1215
+ "type": "text",
1216
+ "text": "Effect of $K$ on CusText. To test the effect of $K$ on CusText in practice, we study the privacy-utility trade-offs with different customization parameters $K = 5,20,50$ on the SST-2 dataset. We choose the mask token inference attack as the privacy metric since its performance is more semantically related. Then, we use Counter-Fitting for its better performance than GloVe, as depicted previously.",
1217
+ "bbox": [
1218
+ 507,
1219
+ 508,
1220
+ 885,
1221
+ 637
1222
+ ],
1223
+ "page_idx": 7
1224
+ },
1225
+ {
1226
+ "type": "text",
1227
+ "text": "The results for different $K$ 's are presented in Figure 5. We observe that the performance of CusText is generally stable for different $K$ 's. But it achieves slightly better utilities when $K$ is smaller at relatively higher privacy protection levels ( $>0.9$ ). This is because, on the one hand, the semantic similarity of output tokens to the input token will be higher when $K$ is smaller. However, on the other hand, a smaller $K$ will also make it easier to infer the input token, thus lowering the privacy protection levels (e.g., for $K = 5$ , it does not exceed 0.96 even when $\\epsilon$ has been decreased to 0.001).",
1228
+ "bbox": [
1229
+ 507,
1230
+ 639,
1231
+ 885,
1232
+ 831
1233
+ ],
1234
+ "page_idx": 7
1235
+ },
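The customized-output-set sampling that $K$ controls can be sketched as follows. The similarity scores are toy values (CusText derives them from GloVe or Counter-Fitting embeddings), and $\Pr[t] \propto \exp(\epsilon \cdot \mathrm{score}(t)/2)$ assumes unit score sensitivity in the exponential mechanism (McSherry and Talwar, 2007).

```python
import math
import random

def custext_sample(token, scores, K, eps, seed=None):
    # Keep only the K candidates most similar to `token` (its
    # customized output set), then draw one via the exponential
    # mechanism: Pr[t] proportional to exp(eps * score(t) / 2).
    top_k = sorted(scores, key=scores.get, reverse=True)[:K]
    weights = [math.exp(eps * scores[t] / 2) for t in top_k]
    return random.Random(seed).choices(top_k, weights=weights, k=1)[0]

# Toy similarity scores for the input token "happy" (illustrative).
scores = {"happy": 1.0, "glad": 0.9, "joyful": 0.8, "car": 0.1}
print(custext_sample("happy", scores, K=3, eps=8.0, seed=1))
```

With a smaller $K$, every candidate in the output set is semantically closer to the input token, while dissimilar tokens such as "car" can never be sampled once they fall outside the top-$K$ set.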
1236
+ {
1237
+ "type": "text",
1238
+ "text": "6 Concluding Remarks",
1239
+ "text_level": 1,
1240
+ "bbox": [
1241
+ 507,
1242
+ 845,
1243
+ 729,
1244
+ 862
1245
+ ],
1246
+ "page_idx": 7
1247
+ },
1248
+ {
1249
+ "type": "text",
1250
+ "text": "In this work, we study the problem of differentially private text sanitization. We propose a novel Cus-Text mechanism consisting of a mapping scheme",
1251
+ "bbox": [
1252
+ 507,
1253
+ 871,
1254
+ 885,
1255
+ 919
1256
+ ],
1257
+ "page_idx": 7
1258
+ },
1259
+ {
1260
+ "type": "page_number",
1261
+ "text": "5754",
1262
+ "bbox": [
1263
+ 480,
1264
+ 928,
1265
+ 521,
1266
+ 940
1267
+ ],
1268
+ "page_idx": 7
1269
+ },
1270
+ {
1271
+ "type": "text",
1272
+ "text": "to assign each input token a customized output set and sampling function generation methods based on the mapping scheme and exponential mechanism to reduce privacy costs while improving the utilities of sanitized texts. Extensive experiments demonstrate that CusText achieves better privacy-utility trade-offs than state-of-the-art text sanitization mechanisms. In the future, we will explore how to improve our mechanism by adaptively allocating privacy costs across tokens and find a better way to decide whether a token is sensitive than based on a pre-defined stopwords list.",
1273
+ "bbox": [
1274
+ 112,
1275
+ 84,
1276
+ 492,
1277
+ 280
1278
+ ],
1279
+ "page_idx": 8
1280
+ },
1281
+ {
1282
+ "type": "text",
1283
+ "text": "Acknowledgements",
1284
+ "text_level": 1,
1285
+ "bbox": [
1286
+ 114,
1287
+ 290,
1288
+ 285,
1289
+ 307
1290
+ ],
1291
+ "page_idx": 8
1292
+ },
1293
+ {
1294
+ "type": "text",
1295
+ "text": "This work was supported by the National Natural Science Foundation of China (under Grant numbers 62202170, 62202169) and Alibaba Group through the Alibaba Innovation Research Program.",
1296
+ "bbox": [
1297
+ 112,
1298
+ 316,
1299
+ 489,
1300
+ 381
1301
+ ],
1302
+ "page_idx": 8
1303
+ },
1304
+ {
1305
+ "type": "text",
1306
+ "text": "Limitations",
1307
+ "text_level": 1,
1308
+ "bbox": [
1309
+ 114,
1310
+ 393,
1311
+ 220,
1312
+ 407
1313
+ ],
1314
+ "page_idx": 8
1315
+ },
1316
+ {
1317
+ "type": "text",
1318
+ "text": "First, as indicated in Table 2, different tokens are not equally vulnerable to privacy attacks. As such, assigning every token with the same output size $K$ and privacy parameter $\\epsilon$ might not be an ideal choice. An improved method would be to adaptively allocate privacy costs across tokens so that all of them are adequately protected. Second, we adopt two simple strategies to decide whether a token is sensitive: assuming all tokens are sensitive or based on a pre-defined stopwords list. However, the prior might be over-protective, but the latter can lead to privacy leakage since stopwords might help infer other sanitized tokens. Therefore, a more flexible and practical way to decide the sensitivity of tokens is required.",
1319
+ "bbox": [
1320
+ 112,
1321
+ 419,
1322
+ 489,
1323
+ 659
1324
+ ],
1325
+ "page_idx": 8
1326
+ },
1327
+ {
1328
+ "type": "text",
1329
+ "text": "References",
1330
+ "text_level": 1,
1331
+ "bbox": [
1332
+ 114,
1333
+ 688,
1334
+ 213,
1335
+ 703
1336
+ ],
1337
+ "page_idx": 8
1338
+ },
1339
+ {
1340
+ "type": "list",
1341
+ "sub_type": "ref_text",
1342
+ "list_items": [
1343
+ "Martín Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In CCS, pages 308-318.",
1344
+ "Emily Alsentzer, John Murphy, William Boag, WeiHung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78.",
1345
+ "Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, and Pasin Manurangsi. 2022. Large-scale differentially private BERT. In EMNLP (Findings), pages 6481-6491."
1346
+ ],
1347
+ "bbox": [
1348
+ 115,
1349
+ 711,
1350
+ 489,
1351
+ 917
1352
+ ],
1353
+ "page_idx": 8
1354
+ },
1355
+ {
1356
+ "type": "list",
1357
+ "sub_type": "ref_text",
1358
+ "list_items": [
1359
+ "Nicholas Carlini, Chang Liu, Ülfar Erlingsson, Jernej Kos, and Dawn Song. 2019. The secret sharer: Evaluating and testing unintended memorization in neural networks. In USENIX Security Symposium, pages 267-284.",
1360
+ "Nicholas Carlini, Florian Tramér, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel. 2021. Extracting training data from large language models. In USENIX Security Symposium, pages 2633-2650.",
1361
+ "Konstantinos Chatzikokolakis, Miguel E. Andres, Nicolás Emilio Bordenabe, and Catuscia Palamidessi. 2013. Broadening the scope of differential privacy using metrics. In Privacy Enhancing Technologies (PETS), pages 82-102.",
1362
+ "John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. 2013. Local privacy and statistical minimax rates. In FOCS, pages 429-438.",
1363
+ "Christophe Dupuy, Radhika Arava, Rahul Gupta, and Anna Rumshisky. 2022. An efficient DP-SGD mechanism for large scale NLU models. In ICASSP, pages 4118-4122.",
1364
+ "Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith. 2006. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography (TCC), pages 265-284.",
1365
+ "Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, and Tom Diethe. 2020. Privacy- and utility-preserving textual analysis via calibrated multivariate perturbations. In WSDM, pages 178-186.",
1366
+ "Oluwaseyi Feyisetan, Tom Diethe, and Thomas Drake. 2019. Leveraging hierarchical representations for preserving privacy and utility in text. In ICDM, pages 210-219.",
1367
+ "James E. Gentle. 2009. Monte Carlo methods for statistical inference. In Computational Statistics, pages 417-433. Springer.",
1368
+ "Jack Hessel and Alexandra Schofield. 2021. How effective is BERT without word ordering? Implications for language understanding and data privacy. In ACL/IJCNLP (Short Papers), pages 204-211.",
1369
+ "Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, and Sotirios A. Tsaftaris. 2021. Survey: Leakage and privacy at inference time. arXiv:2107.01614.",
1370
+ "Ninghui Li, Tiancheng Li, and Suresh Venkatasubramanian. 2007. t-closeness: Privacy beyond k-anonymity and l-diversity. In ICDE, pages 106-115.",
1371
+ "Xuechen Li, Florian Tramér, Percy Liang, and Tatsunori Hashimoto. 2022. Large language models can be strong differentially private learners. In *ICLR*."
1372
+ ],
1373
+ "bbox": [
1374
+ 510,
1375
+ 85,
1376
+ 884,
1377
+ 917
1378
+ ],
1379
+ "page_idx": 8
1380
+ },
1381
+ {
1382
+ "type": "page_number",
1383
+ "text": "5755",
1384
+ "bbox": [
1385
+ 480,
1386
+ 928,
1387
+ 519,
1388
+ 940
1389
+ ],
1390
+ "page_idx": 8
1391
+ },
1392
+ {
1393
+ "type": "list",
1394
+ "sub_type": "ref_text",
1395
+ "list_items": [
1396
+ "Lingjuan Lyu, Xuanli He, and Yitong Li. 2020. Differentially private representation for NLP: Formal guarantee and an empirical study on privacy and fairness. In EMNLP (Findings), pages 2355-2365.",
1397
+ "Ashwin Machanavajjhala, Daniel Kifer, Johannes Gehrke, and Muthuramakrishnan Venkitasubramaniam. 2007. L-diversity: Privacy beyond k-anonymity. ACM Trans. Knowl. Discov. Data, 1(1):3:1-3:52.",
1398
+ "Frank McSherry and Kunal Talwar. 2007. Mechanism design via differential privacy. In *FOCS*, pages 94-103.",
1399
+ "Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv:1301.3781.",
1400
+ "Fatemehsadat Mireshghallah, Huseyin A. Inan, Marcello Hasegawa, Victor Ruhle, Taylor Berg-Kirkpatrick, and Robert Sim. 2021. Privacy regularization: Joint privacy-utility optimization in LanguageModels. In NAACL-HLT, pages 3799-3807.",
1401
+ "Nikola Mrksic, Diarmuid Řéaghdha, Blaise Thomson, Milica Gasic, Lina Maria Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve J. Young. 2016. Counter-fitting word vectors to linguistic constraints. In *NAACL-HLT*, pages 142-148.",
1402
+ "Takao Murakami and Yusuke Kawamoto. 2019. Utility-optimized local differential privacy mechanisms for distribution estimation. In USENIX Security Symposium, pages 1877-1894.",
1403
+ "Yiwen Nie, Wei Yang, Liusheng Huang, Xike Xie, Zhenhua Zhao, and Shaowei Wang. 2019. A utility-optimized framework for personalized private histogram estimation. IEEE Trans. Knowl. Data Eng., 31(4):655-669.",
1404
+ "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, pages 1532-1543.",
1405
+ "Chen Qu, Weize Kong, Liu Yang, Mingyang Zhang, Michael Bendersky, and Marc Najork. 2021. Natural language understanding with privacy-preserving BERT. In CIKM, pages 1488-1497.",
1406
+ "Gerard Salton and Chris Buckley. 1988. Term-weighting approaches in automatic text retrieval. Inf. Process. Manag., 24(5):513-523.",
1407
+ "Congzheng Song and Ananth Raghunathan. 2020. Information leakage in embedding models. In CCS, pages 377-390.",
1408
+ "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR."
1409
+ ],
1410
+ "bbox": [
1411
+ 115,
1412
+ 85,
1413
+ 489,
1414
+ 917
1415
+ ],
1416
+ "page_idx": 9
1417
+ },
1418
+ {
1419
+ "type": "list",
1420
+ "sub_type": "ref_text",
1421
+ "list_items": [
1422
+ "Yanshan Wang, Naveed Afzal, Sunyang Fu, Liwei Wang, Feichen Shen, Majid Rastegar-Mojarad, and Hongfang Liu. 2020. MedSTS: a resource for clinical semantic textual similarity. Lang. Resour. Eval., 54(1):57-72.",
1423
+ "Thomas Wolf, Lysandre Debut, et al. 2020. Transformers: State-of-the-art natural language processing. In EMNLP (Demos), pages 38-45.",
1424
+ "Xiang Yue, Minxin Du, Tianhao Wang, Yaliang Li, Huan Sun, and Sherman S. M. Chow. 2021. Differential privacy for text analytics via natural text sanitization. In ACL/IJCNLP (Findings), pages 3853-3866.",
1425
+ "Ying Zhao and Jinjun Chen. 2022. A survey on differential privacy for unstructured data content. ACM Comput. Surv., 54(10s):207:1-207:28."
1426
+ ],
1427
+ "bbox": [
1428
+ 510,
1429
+ 85,
1430
+ 880,
1431
+ 326
1432
+ ],
1433
+ "page_idx": 9
1434
+ },
1435
+ {
1436
+ "type": "page_number",
1437
+ "text": "5756",
1438
+ "bbox": [
1439
+ 480,
1440
+ 928,
1441
+ 519,
1442
+ 940
1443
+ ],
1444
+ "page_idx": 9
1445
+ },
1446
+ {
1447
+ "type": "text",
1448
+ "text": "A For every submission:",
1449
+ "text_level": 1,
1450
+ "bbox": [
1451
+ 114,
1452
+ 107,
1453
+ 322,
1454
+ 122
1455
+ ],
1456
+ "page_idx": 10
1457
+ },
1458
+ {
1459
+ "type": "text",
1460
+ "text": "A1. Did you describe the limitations of your work?",
1461
+ "bbox": [
1462
+ 127,
1463
+ 127,
1464
+ 532,
1465
+ 143
1466
+ ],
1467
+ "page_idx": 10
1468
+ },
1469
+ {
1470
+ "type": "text",
1471
+ "text": "Left blank.",
1472
+ "bbox": [
1473
+ 149,
1474
+ 143,
1475
+ 231,
1476
+ 159
1477
+ ],
1478
+ "page_idx": 10
1479
+ },
1480
+ {
1481
+ "type": "text",
1482
+ "text": "A2. Did you discuss any potential risks of your work?",
1483
+ "bbox": [
1484
+ 127,
1485
+ 170,
1486
+ 552,
1487
+ 186
1488
+ ],
1489
+ "page_idx": 10
1490
+ },
1491
+ {
1492
+ "type": "text",
1493
+ "text": "Left blank.",
1494
+ "bbox": [
1495
+ 149,
1496
+ 187,
1497
+ 231,
1498
+ 200
1499
+ ],
1500
+ "page_idx": 10
1501
+ },
1502
+ {
1503
+ "type": "text",
1504
+ "text": "A3. Do the abstract and introduction summarize the paper's main claims?",
1505
+ "bbox": [
1506
+ 127,
1507
+ 212,
1508
+ 695,
1509
+ 229
1510
+ ],
1511
+ "page_idx": 10
1512
+ },
1513
+ {
1514
+ "type": "text",
1515
+ "text": "Left blank.",
1516
+ "bbox": [
1517
+ 149,
1518
+ 230,
1519
+ 231,
1520
+ 244
1521
+ ],
1522
+ "page_idx": 10
1523
+ },
1524
+ {
1525
+ "type": "text",
1526
+ "text": "□ A4. Have you used AI writing assistants when working on this paper?",
1527
+ "bbox": [
1528
+ 127,
1529
+ 255,
1530
+ 668,
1531
+ 272
1532
+ ],
1533
+ "page_idx": 10
1534
+ },
1535
+ {
1536
+ "type": "text",
1537
+ "text": "Left blank.",
1538
+ "bbox": [
1539
+ 149,
1540
+ 273,
1541
+ 231,
1542
+ 287
1543
+ ],
1544
+ "page_idx": 10
1545
+ },
1546
+ {
1547
+ "type": "text",
1548
+ "text": "B Did you use or create scientific artifacts?",
1549
+ "text_level": 1,
1550
+ "bbox": [
1551
+ 114,
1552
+ 300,
1553
+ 490,
1554
+ 316
1555
+ ],
1556
+ "page_idx": 10
1557
+ },
1558
+ {
1559
+ "type": "text",
1560
+ "text": "Left blank.",
1561
+ "bbox": [
1562
+ 132,
1563
+ 321,
1564
+ 215,
1565
+ 336
1566
+ ],
1567
+ "page_idx": 10
1568
+ },
1569
+ {
1570
+ "type": "text",
1571
+ "text": "B1. Did you cite the creators of artifacts you used?",
1572
+ "bbox": [
1573
+ 127,
1574
+ 347,
1575
+ 529,
1576
+ 363
1577
+ ],
1578
+ "page_idx": 10
1579
+ },
1580
+ {
1581
+ "type": "text",
1582
+ "text": "Left blank.",
1583
+ "bbox": [
1584
+ 149,
1585
+ 363,
1586
+ 231,
1587
+ 379
1588
+ ],
1589
+ "page_idx": 10
1590
+ },
1591
+ {
1592
+ "type": "text",
1593
+ "text": "B2. Did you discuss the license or terms for use and / or distribution of any artifacts?",
1594
+ "bbox": [
1595
+ 127,
1596
+ 390,
1597
+ 778,
1598
+ 406
1599
+ ],
1600
+ "page_idx": 10
1601
+ },
1602
+ {
1603
+ "type": "text",
1604
+ "text": "Left blank.",
1605
+ "bbox": [
1606
+ 149,
1607
+ 407,
1608
+ 231,
1609
+ 422
1610
+ ],
1611
+ "page_idx": 10
1612
+ },
1613
+ {
1614
+ "type": "text",
1615
+ "text": "B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?",
1616
+ "bbox": [
1617
+ 127,
1618
+ 432,
1619
+ 880,
1620
+ 495
1621
+ ],
1622
+ "page_idx": 10
1623
+ },
1624
+ {
1625
+ "type": "text",
1626
+ "text": "Left blank.",
1627
+ "bbox": [
1628
+ 149,
1629
+ 498,
1630
+ 231,
1631
+ 513
1632
+ ],
1633
+ "page_idx": 10
1634
+ },
1635
+ {
1636
+ "type": "text",
1637
+ "text": "B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?",
1638
+ "bbox": [
1639
+ 127,
1640
+ 524,
1641
+ 880,
1642
+ 571
1643
+ ],
1644
+ "page_idx": 10
1645
+ },
1646
+ {
1647
+ "type": "text",
1648
+ "text": "Left blank.",
1649
+ "bbox": [
1650
+ 149,
1651
+ 573,
1652
+ 231,
1653
+ 588
1654
+ ],
1655
+ "page_idx": 10
1656
+ },
1657
+ {
1658
+ "type": "text",
1659
+ "text": "B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?",
1660
+ "bbox": [
1661
+ 127,
1662
+ 599,
1663
+ 880,
1664
+ 631
1665
+ ],
1666
+ "page_idx": 10
1667
+ },
1668
+ {
1669
+ "type": "text",
1670
+ "text": "Left blank.",
1671
+ "bbox": [
1672
+ 149,
1673
+ 632,
1674
+ 231,
1675
+ 646
1676
+ ],
1677
+ "page_idx": 10
1678
+ },
1679
+ {
1680
+ "type": "text",
1681
+ "text": "B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.",
1682
+ "bbox": [
1683
+ 127,
1684
+ 658,
1685
+ 880,
1686
+ 739
1687
+ ],
1688
+ "page_idx": 10
1689
+ },
1690
+ {
1691
+ "type": "text",
1692
+ "text": "Left blank.",
1693
+ "bbox": [
1694
+ 149,
1695
+ 740,
1696
+ 231,
1697
+ 753
1698
+ ],
1699
+ "page_idx": 10
1700
+ },
1701
+ {
1702
+ "type": "text",
1703
+ "text": "C Did you run computational experiments?",
1704
+ "text_level": 1,
1705
+ "bbox": [
1706
+ 114,
1707
+ 765,
1708
+ 495,
1709
+ 781
1710
+ ],
1711
+ "page_idx": 10
1712
+ },
1713
+ {
1714
+ "type": "text",
1715
+ "text": "Left blank.",
1716
+ "bbox": [
1717
+ 132,
1718
+ 787,
1719
+ 215,
1720
+ 802
1721
+ ],
1722
+ "page_idx": 10
1723
+ },
1724
+ {
1725
+ "type": "text",
1726
+ "text": "C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?",
1727
+ "bbox": [
1728
+ 127,
1729
+ 813,
1730
+ 880,
1731
+ 845
1732
+ ],
1733
+ "page_idx": 10
1734
+ },
1735
+ {
1736
+ "type": "text",
1737
+ "text": "Left blank.",
1738
+ "bbox": [
1739
+ 149,
1740
+ 846,
1741
+ 231,
1742
+ 860
1743
+ ],
1744
+ "page_idx": 10
1745
+ },
1746
+ {
1747
+ "type": "text",
1748
+ "text": "The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.",
1749
+ "bbox": [
1750
+ 112,
1751
+ 868,
1752
+ 877,
1753
+ 892
1754
+ ],
1755
+ "page_idx": 10
1756
+ },
1757
+ {
1758
+ "type": "header",
1759
+ "text": "ACL 2023 Responsible NLP Checklist",
1760
+ "bbox": [
1761
+ 132,
1762
+ 84,
1763
+ 433,
1764
+ 99
1765
+ ],
1766
+ "page_idx": 10
1767
+ },
1768
+ {
1769
+ "type": "page_number",
1770
+ "text": "5757",
1771
+ "bbox": [
1772
+ 480,
1773
+ 928,
1774
+ 519,
1775
+ 940
1776
+ ],
1777
+ "page_idx": 10
1778
+ },
1779
+ {
1780
+ "type": "list",
1781
+ "sub_type": "text",
1782
+ "list_items": [
1783
+ "C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank.",
1784
+ "C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank.",
1785
+ "C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank."
1786
+ ],
1787
+ "bbox": [
1788
+ 127,
1789
+ 84,
1790
+ 880,
1791
+ 282
1792
+ ],
1793
+ "page_idx": 11
1794
+ },
1795
+ {
1796
+ "type": "text",
1797
+ "text": "D Did you use human annotators (e.g., crowdworkers) or research with human participants?",
1798
+ "text_level": 1,
1799
+ "bbox": [
1800
+ 112,
1801
+ 293,
1802
+ 877,
1803
+ 309
1804
+ ],
1805
+ "page_idx": 11
1806
+ },
1807
+ {
1808
+ "type": "text",
1809
+ "text": "Left blank.",
1810
+ "bbox": [
1811
+ 132,
1812
+ 313,
1813
+ 213,
1814
+ 329
1815
+ ],
1816
+ "page_idx": 11
1817
+ },
1818
+ {
1819
+ "type": "list",
1820
+ "sub_type": "text",
1821
+ "list_items": [
1822
+ "D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank.",
1823
+ "D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank.",
1824
+ "D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank.",
1825
+ "D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank.",
1826
+ "D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank."
1827
+ ],
1828
+ "bbox": [
1829
+ 127,
1830
+ 340,
1831
+ 880,
1832
+ 640
1833
+ ],
1834
+ "page_idx": 11
1835
+ },
1836
+ {
1837
+ "type": "page_number",
1838
+ "text": "5758",
1839
+ "bbox": [
1840
+ 480,
1841
+ 928,
1842
+ 519,
1843
+ 940
1844
+ ],
1845
+ "page_idx": 11
1846
+ }
1847
+ ]
2023/A Customized Text Sanitization Mechanism with Differential Privacy/cecb5923-3636-44eb-8dc9-ff69d69aa2f0_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Customized Text Sanitization Mechanism with Differential Privacy/cecb5923-3636-44eb-8dc9-ff69d69aa2f0_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:801ff6f4ca5106e3d847b2f574d5e7baca8cde945d2558116b30f62b51cc834e
3
+ size 552801
2023/A Customized Text Sanitization Mechanism with Differential Privacy/full.md ADDED
@@ -0,0 +1,341 @@
1
+ # A Customized Text Sanitization Mechanism with Differential Privacy
2
+
3
+ Huimin Chen $^{1*}$ , Fengran Mo $^{2*}$ , Yanhao Wang $^{1}$ , Cen Chen $^{1\dagger}$ , Jian-Yun Nie $^{2}$ , Chengyu Wang $^{3}$ , Jamie Cui $^{4}$
4
+
5
+ <sup>1</sup>East China Normal University <sup>2</sup>Université de Montréal <sup>3</sup>Alibaba Group <sup>4</sup>Ant Group
6
+
7
+ saichen@stu.ecnu.edu.cn, fengran.mo@umontreal.ca
8
+
9
+ {yhwang, cenchen}@dase.ecnu.edu.cn, Nie@iro.umontreal.ca
10
+
11
+ chywang2013@gmail.com, jamie.cui@outlook.com
12
+
# Abstract

As privacy issues are receiving increasing attention within the Natural Language Processing (NLP) community, numerous methods have been proposed to sanitize texts subject to differential privacy. However, the state-of-the-art text sanitization mechanisms based on metric local differential privacy (MLDP) do not apply to non-metric semantic similarity measures and cannot achieve good trade-offs between privacy and utility. To address the above limitations, we propose a novel Customized Text (CusText) sanitization mechanism based on the original $\epsilon$-differential privacy (DP) definition, which is compatible with any similarity measure. Furthermore, CusText assigns each input token a customized output set of tokens to provide more advanced privacy protection at the token level. Extensive experiments on several benchmark datasets show that CusText achieves a better trade-off between privacy and utility than existing mechanisms. The code is available at https://github.com/sai4july/CusText.

# 1 Introduction

In many Natural Language Processing (NLP) applications, input texts often contain sensitive information that can reveal the identity of specific persons (Jegorova et al., 2021), leading to potential privacy leakage that impedes privacy-conscious users from releasing data to service providers (Carlini et al., 2019, 2021; Song and Raghunathan, 2020). Moreover, legal restrictions such as the CCPA$^1$ and GDPR$^2$ may further limit the sharing of sensitive textual data. This makes it difficult for NLP service providers to collect training data unless the privacy concerns of data owners, including individuals and institutions, are well addressed.

![](images/387a9bee705680d4db12453183db61d0839fcd0c6388397b70380274774e10ba.jpg)
Figure 1: A privacy-preserving NLP workflow.

To address such privacy issues, great efforts (Lyu et al., 2020; Anil et al., 2022; Dupuy et al., 2022; Li et al., 2022; Mireshghallah et al., 2021) have been made to train language models (LMs) with differential privacy (DP) (Dwork et al., 2006), which has been regarded as the de facto standard for privacy-preserving computation. These approaches mainly focus on adding calibrated noise to gradients or text representations during the training phase so that sensitive user data cannot be inferred from trained LMs. Nevertheless, they require service providers to collect the original data for LM training. As such, data owners may still have privacy concerns when service providers are not fully trusted.

To solve the privacy problem from the root, a common paradigm is to let data owners sanitize their data locally before releasing them to the service provider, as shown in Figure 1. Generally, such privatization mechanisms (Feyisetan et al., 2019, 2020; Yue et al., 2021) generate a sanitized text document by sequentially replacing the original tokens (e.g., characters, words, or $n$-grams) in the original document with new tokens sampled from output token sets. Specifically, they adopt Metric Local Differential Privacy (MLDP, also known as $d_{\chi}$-privacy) (Chatzikokolakis et al., 2013), a relaxation of the original DP definition, to provide privacy and utility guarantees simultaneously. On the one hand, MLDP inherits the idea of DP to ensure that the outputs of any adjacent input tokens are indistinguishable, protecting the original tokens from being inferred. On the other hand, MLDP preserves the utility of sanitized texts by assigning higher sampling probabilities to tokens that are semantically closer to the original ones. In these mechanisms, any metric distance (e.g., Euclidean distance) can be used to measure the semantic similarities between tokens.

However, the above text sanitization mechanisms suffer from two inherent limitations. First, since MLDP is specific to metric distances satisfying the triangle inequality, they do not apply to non-metric semantic similarity measures used in NLP applications, such as cosine similarity (Mrksic et al., 2016) and TF-IDF (Salton and Buckley, 1988). Second, they cannot achieve good privacy-utility trade-offs, i.e., they either have high privacy costs with insufficient protection or result in low accuracy of models trained on sanitized data. We observe that the low accuracy arises because they treat every token in the text equally by assigning all input tokens the same output set, which can be excessively large (e.g., over 80,000 tokens). Such a huge output set leads to high costs for MLDP and thus impedes the model's utility when the privacy budget is tight.

To this end, we propose a novel Customized Text (CusText) sanitization mechanism that provides more advanced privacy protection at the token level. Specifically, to generalize CusText to all similarity measures, we turn to a mechanism that satisfies the original $\epsilon$-Differential Privacy ($\epsilon$-DP), i.e., the Exponential Mechanism (EM) (McSherry and Talwar, 2007), to sample the output for each input token. Meanwhile, we inherit the merit of MLDP by designing an appropriate scoring function for EM that takes into account the semantic similarities between tokens for sampling. Then, to achieve a better trade-off between privacy and utility, we design a mapping scheme that assigns each input token a customized output set of a much smaller size for token-level privacy protection. Here, a customization parameter $K$ determines the size of each input token's output set and can be adjusted for different utility-privacy trade-offs. Using the mapping scheme, we exclude most of the tokens that are semantically irrelevant to the input token from consideration and reduce the privacy costs caused by excessive output set sizes. As the privacy risks of some tokens, e.g., stopwords, are low in practice, we further propose an improved CusText+ mechanism that skips the stopwords in the sampling process to achieve higher utility without incurring greater privacy losses.

Finally, we conduct extensive experiments on three benchmark datasets to demonstrate that CusText achieves better privacy-utility trade-offs than the state-of-the-art text sanitization mechanisms in (Feyisetan et al., 2020; Yue et al., 2021). In particular, with the same privacy parameter $\epsilon$, the models trained on texts sanitized by CusText have significantly higher accuracy rates than those sanitized by SANTEXT (Yue et al., 2021). Furthermore, when the utilities of the models are comparable, CusText provides better protection against two token inference attacks than SANTEXT.

# 2 Related Work

There have been numerous studies on the vulnerability of deep learning models (Carlini et al., 2019; Song and Raghunathan, 2020), including language models (LMs) (Carlini et al., 2021; Zhao and Chen, 2022), against privacy attacks. In particular, such attacks can recover sensitive user attributes or raw texts from trained models. Therefore, incorporating privacy mechanisms with rigorous guarantees is vital to protect LMs from privacy attacks.

A few attempts at applying anonymization techniques designed for i.i.d. data (Li et al., 2007; Machanavajjhala et al., 2007) fail to provide strong privacy protection for textual data (Zhao and Chen, 2022). Consequently, many efforts (Lyu et al., 2020; Anil et al., 2022; Dupuy et al., 2022; Hessel and Schofield, 2021; Li et al., 2022; Mireshghallah et al., 2021) have been made to preserve the utility of LMs on textual data with provable differential privacy (DP) guarantees. Following the application of DP in deep learning (Abadi et al., 2016), they mainly focus on adding calibrated noise to gradients or text representations during the training phase for both utility and privacy. However, they need a trustworthy server to collect original texts from data owners for model training and thus cannot be applied to scenarios without trusted servers.

To address privacy issues from the root, various (customized) local differential privacy (LDP) mechanisms (Duchi et al., 2013; Chatzikokolakis et al., 2013) have been proposed to allow data owners to sanitize their data locally before releasing them to the server. Due to the high dimensionality and complicated features of textual data, it is much more challenging to achieve good utility-privacy trade-offs for LMs with LDP than for statistical analytics on i.i.d. data with LDP (Murakami and Kawamoto, 2019; Nie et al., 2019). To improve model utility, existing methods (Feyisetan et al., 2020; Qu et al., 2021; Yue et al., 2021) rely on the relaxed notion of metric local differential privacy (MLDP, also known as $d_{\chi}$-privacy) (Chatzikokolakis et al., 2013) for text sanitization. However, they either achieve reasonable accuracy only at a very low privacy protection level (e.g., with a privacy parameter $\epsilon > 10$) or become unusable (around $50\%$ accuracy on benchmark binary classification tasks) under appropriate privacy guarantees (e.g., $\epsilon = 2$). Thus, there remains much room for improvement in the utility-privacy trade-off for differentially private text sanitization, which is the goal of this work.

# 3 Preliminaries

Before introducing our CusText mechanism, we briefly review the key concepts, including $\epsilon$-DP and the exponential mechanism (EM).

Definition 1 ($\epsilon$-differential privacy (Dwork et al., 2006)). For a given privacy parameter $\epsilon \geq 0$, a randomized mechanism $\mathcal{M}$ is $\epsilon$-differentially private (DP) if, for all pairs of adjacent inputs $x, x' \in \mathcal{X}$ and every possible output $y \in \mathcal{Y}$, it holds that

$$
\frac{\Pr[\mathcal{M}(x) = y]}{\Pr[\mathcal{M}(x') = y]} \leq e^{\epsilon}. \tag{1}
$$

By definition, a smaller value of $\epsilon$ corresponds to a higher level of privacy protection. Conceptually, $\epsilon$-DP means that even an adversary with unlimited resources cannot distinguish between the two output distributions when $\epsilon$ is sufficiently small, because the probabilities of adjacent tokens producing the same output token $y$ are similar. In the context of NLP, we consider any pair of input tokens that share the same output set $\mathcal{Y}$ to be adjacent to each other. In the rest of this paper, we follow the above definition of adjacent inputs for $\epsilon$-DP. Next, we define the Exponential Mechanism (EM), which is commonly used for differentially private item selection from a discrete domain and naturally fits NLP applications due to the discrete nature of textual data.

Definition 2 (Exponential Mechanism (McSherry and Talwar, 2007)). For a given scoring function $u: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$, an exponential mechanism (EM) $\mathcal{M}(\mathcal{X}, u, \mathcal{Y})$ satisfies $\epsilon$-differential privacy if it samples an output token $y \in \mathcal{Y}$ to perturb the input token $x \in \mathcal{X}$ with probability proportional to $e^{\frac{\epsilon \cdot u(x, y)}{2\Delta u}}$, where $u(x, y)$ denotes the score of output token $y$ for input token $x$. In addition, we use $\Delta u := \max_{y \in \mathcal{Y}} \max_{x, x' \in \mathcal{X}} |u(x, y) - u(x', y)|$ to denote the sensitivity of $u$ for EM.

From Definition 2, we can see that a smaller sensitivity makes it harder for adversaries to distinguish the original token from its adjacent tokens. In practice, for simplicity, we can normalize the scoring function $u$ to scale its sensitivity $\Delta u$ to a specific real number (e.g., 1). As such, the sampling probability of each output token $y$ for input token $x$ is only related to $u(x, y)$, as $\epsilon$ and $\Delta u$ are known beforehand, and a larger $u(x, y)$ indicates a higher sampling probability.

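To make Definition 2 concrete, the probability-proportional sampling can be sketched as follows. This is a minimal illustration, not the paper's implementation; the token names and scores are made up, and the scores are assumed to be already normalized so that the sensitivity is at most `delta_u`.

```python
import math
import random

def em_sample(scores, epsilon, delta_u=1.0, rng=random):
    """Sample an output token via the exponential mechanism.

    scores: dict mapping candidate output token -> u(x, y), assumed
    normalized so that the sensitivity of u is at most delta_u.
    """
    # Unnormalized weights exp(eps * u / (2 * delta_u)); subtracting the
    # max score first improves numerical stability and cancels out.
    m = max(scores.values())
    weights = {y: math.exp(epsilon * (u - m) / (2 * delta_u))
               for y, u in scores.items()}
    total = sum(weights.values())
    r = rng.uniform(0, total)
    acc = 0.0
    for y, w in weights.items():
        acc += w
        if r <= acc:
            return y
    return y  # fallback for floating-point edge cases

scores = {"film": 1.0, "movie": 0.9, "banana": 0.0}
token = em_sample(scores, epsilon=2.0)
```

With a large $\epsilon$ the highest-scoring candidate dominates; as $\epsilon \to 0$ the distribution approaches uniform over the output set, which is what makes adjacent tokens indistinguishable.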
In an NLP task, we suppose that each document $D = \langle R_i \rangle_{i=1}^m$ contains $m$ records and each record $R = \langle t_j \rangle_{j=1}^n$ contains $n$ tokens. We formulate our text sanitization task as follows: Given an input document $D$ containing sensitive information, a set $\mathcal{X}$ of all possible input tokens, a set $\mathcal{Y}$ of all possible output tokens, and a differentially private mechanism $\mathcal{M}$ (e.g., EM in this work), we apply $\mathcal{M}$ to each input token $t_j \in D$ to replace it with an output token $t_j'$ from $\mathcal{Y}$ if $t_j \in \mathcal{X}$. All the tokens after replacement form the sanitized document, i.e., $D' = \langle R_i' \rangle_{i=1}^m$ with $R' = \langle t_j' \rangle_{j=1}^n$.

Following the prior work on text sanitization (Qu et al., 2021; Feyisetan et al., 2020; Yue et al., 2021), we consider a semi-honest threat model under the LDP setting, where data owners (e.g., individuals or institutions) only submit their sanitized documents to the service provider. Malicious service providers may try to infer sensitive information from the data they receive. We assume that adversaries only have access to sanitized texts, that all algorithms and mechanisms are publicly known, and that adversaries have unlimited computation resources.

# 4 The CusText Mechanism

An overview of our customized text (CusText) sanitization mechanism is presented in Figure 2. In general, it replaces each token in the original text document with a new token to achieve the privacy guarantee. It consists of two components: (1) a mapping function $f_{\mathrm{map}}: \mathcal{X} \to \{\mathcal{Y}' \subseteq \mathcal{Y}\}$ that determines the output set $\mathcal{Y}_j'$ for each input token $x_j \in \mathcal{X}$ based on semantic relevance; and (2) a sampling function $f_{\mathrm{sample}}: \mathcal{X}' \to \mathcal{Y}'$ based on the exponential mechanism that samples a new token from an output set to sanitize the input token. Specifically, our CusText mechanism first obtains the output set $\mathcal{Y}_j'$ for each $t_j \in D$ according to $f_{\mathrm{map}}$, i.e., $\mathcal{Y}_j' = f_{\mathrm{map}}(t_j)$, and then samples an output token $t_j'$ from $\mathcal{Y}_j'$ according to $f_{\mathrm{sample}}$, i.e., $t_j' = f_{\mathrm{sample}}(t_j)$. Finally, after applying CusText to each input token $t_j$ in $D$, the sanitized document $D'$ is formed by all output tokens.

![](images/e04efdbc1d60965103b1a40cc1f890e5982f49da4a7fef8518fee4e5964dd825.jpg)
Figure 2: An overview of the CusText method.

![](images/070d6b3bd75911cf6dc1f8c2ce644dc5aecdaf412027b6cf31f1462db9ca908a.jpg)
Figure 3: A comparison of the mapping schemes of SANTEXT and CusText.

# 4.1 Mapping Function

In our CusText mechanism, the mapping function $f_{\mathrm{map}}: \mathcal{X} \to \{\mathcal{Y}' \subseteq \mathcal{Y}\}$ decides the output set for each input token. If a group of input tokens in $\mathcal{X}$ are mapped to the same output set $\mathcal{Y}'$, we say that they belong to the same input set $\mathcal{X}' \subseteq \mathcal{X}$ and are adjacent to each other. In the SANTEXT mechanism (Yue et al., 2021), the mapping function $f_{\mathrm{map}}: \mathcal{X} \to \mathcal{Y}$ simply maps every input token $x \in \mathcal{X}$ to all tokens in the output set $\mathcal{Y}$. Since the output set in SANTEXT is excessively large, the chance that the output token is semantically irrelevant to the original token becomes higher when the privacy budget is tight, thus leading to poor model utility. To overcome this problem, CusText customizes the output set of each input token. A comparison of the mapping schemes of CusText and SANTEXT is shown in Figure 3. Before introducing how to construct $f_{\mathrm{map}}$, we first discuss the requirements for mapping generation.

# Algorithm 1 Token Mapping Generation

Input: Customization parameter $K$, input set $\mathcal{X}$, output set $\mathcal{Y} = \mathcal{X}$, similarity measure $d$

Output: Mapping function $f_{\mathrm{map}}$

1: while $|\mathcal{X}| \geq K$ do
2: Pick an arbitrary token $x$ from $\mathcal{X}$
3: Initialize an output set $\mathcal{Y}' = \{x\}$ for $x$
4: for all $y \in \mathcal{Y} \setminus \{x\}$ do
5: Compute the similarity $d(x, y)$ of $x$ and $y$
6: Add the top-$(K-1)$ tokens that are semantically closest to $x$ to $\mathcal{Y}'$ based on $d(\cdot, \cdot)$
7: for all $x' \in \mathcal{Y}'$ do
8: Assign the output set of $x'$ as $\mathcal{Y}'$
9: Update $\mathcal{X} \gets \mathcal{X} \setminus \mathcal{Y}'$ and $\mathcal{Y} \gets \mathcal{Y} \setminus \mathcal{Y}'$
10: Perform Lines 2–9 for the remaining tokens in $\mathcal{X}$ and $\mathcal{Y}$ with customization parameter $K' = |\mathcal{X}|$
11: return $f_{\mathrm{map}}$

Mapping Strategy. According to the sizes of $\mathcal{X}'$ and $\mathcal{Y}'$ as indicated by the mapping function $f_{\mathrm{map}}$, we can categorize the token mappings into four types: 1-to-1, $N$-to-1, 1-to-$N$, and $N$-to-$M$, where 1, $N$, and $M$ denote the sizes of the input/output token sets and $N, M > 1$. Theoretically, CusText can provide $\epsilon$-differential privacy protection to all input tokens only if the mappings of all input tokens in $\mathcal{X}$ are $N$-to-$M$ or $N$-to-1 mappings, so that every input token in $\mathcal{X}$ has at least one adjacent token. This is because the goal of applying $\epsilon$-DP is to make any two adjacent tokens indistinguishable so that the input token cannot be effectively inferred. Moreover, following prior work (Feyisetan et al., 2020; Yue et al., 2021), we consider $\mathcal{X}$ to be equal to $\mathcal{Y}$ (i.e., $\mathcal{X} = \mathcal{Y}$) in CusText, as both correspond to the vocabulary of a specific language. Also, any input token $x$ is always included in its output set because it must be the closest to itself. Next, we describe our mapping generation scheme, which satisfies all the above requirements.

Mapping Function Generation. The generation of the mapping function $f_{\mathrm{map}}: \mathcal{X} \to \{\mathcal{Y}' \subseteq \mathcal{Y}\}$ assigns the customized output set for each input token based on semantic relevance. The semantic relevance can be defined by any similarity measure $d: \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$. In practice, we use the Euclidean distance or cosine similarity on the vector representations of tokens, such as Word2Vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014), and Counter-Fitting (Mrksic et al., 2016), as the similarity measure. Then, we fix the sizes of all output sets to $K$. Specifically, at each round, we pick an arbitrary unmapped token $x \in \mathcal{X}$, find the $K$ tokens semantically closest to $x$ (including $x$ itself), generate a $K$-to-$K$ mapping from all the $K$ tokens to themselves, and remove the mapped tokens from $\mathcal{X}$ and $\mathcal{Y}$, until either all tokens are mapped or fewer than $K$ tokens remain unmapped. In the latter case, the remaining tokens constitute a $K'$-to-$K'$ mapping where $K' \in [1, K)$. The pseudocode for generating the mapping function $f_{\mathrm{map}}$ is presented in Algorithm 1.

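The round-by-round grouping of Algorithm 1 can be sketched in Python as follows. This is an illustrative re-implementation under simplifying assumptions (a small in-memory vocabulary and Euclidean distance on given embedding vectors), not the released CusText code.

```python
import numpy as np

def build_mapping(vocab, embeddings, K):
    """Generate K-to-K token mappings (a sketch of Algorithm 1).

    vocab: list of tokens; embeddings: dict token -> np.ndarray;
    K: customization parameter (target output-set size).
    """
    f_map = {}
    remaining = list(vocab)
    while remaining:
        k = min(K, len(remaining))  # the last group may be smaller than K
        x = remaining[0]            # pick an arbitrary unmapped token
        # Rank remaining tokens by Euclidean distance to x (closest first);
        # x itself is at distance 0, so it always lands in its own set.
        ranked = sorted(remaining,
                        key=lambda y: np.linalg.norm(embeddings[x] - embeddings[y]))
        group = ranked[:k]
        for t in group:
            f_map[t] = group  # all tokens in the group share one output
        remaining = [t for t in remaining if t not in group]
    return f_map
```

Because every token in a group shares the same output set, the tokens in each group are mutually adjacent, which is exactly the precondition needed for the $\epsilon$-DP guarantee of the sampling step.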
# 4.2 Sampling Function

Based on the mapping function $f_{\mathrm{map}}: \mathcal{X} \to \{\mathcal{Y}' \subseteq \mathcal{Y}\}$, a sampling function $f_{\mathrm{sample}}: \mathcal{X}' \to \mathcal{Y}'$ is designed to sample the output token for each input token. CusText adopts the exponential mechanism (EM) (McSherry and Talwar, 2007) for sampling. We need to design an appropriate scoring function for EM to strike a good utility-privacy trade-off. We obey the following two rules when designing the scoring function $u: \mathcal{X}' \times \mathcal{Y}' \to \mathbb{R}$.

1. The score of each pair of input and output tokens should be bounded, i.e., $\forall x \in \mathcal{X}', \forall y \in \mathcal{Y}'$, $|u(x, y)| \leq B$ for some constant $B$, so that the sensitivity $\Delta u$ of $u$ is bounded for satisfying $\epsilon$-DP.
2. The higher the semantic similarity between a pair of input and output tokens, the higher the score, i.e., $\forall x \in \mathcal{X}', \forall y, y' \in \mathcal{Y}'$, if $y$ is semantically closer to $x$ than $y'$, then $u(x, y) > u(x, y')$. This ensures that candidates semantically closer to $x$ have higher probabilities of being sampled, which inherits the advantage of $d_{\chi}$-privacy (Chatzikokolakis et al., 2013).

We base the scoring function on the same similarity measure as used in the mapping scheme, e.g., Euclidean distance or cosine similarity on the vector representations of tokens (Mikolov et al., 2013; Pennington et al., 2014; Mrksic et al., 2016). Generally, according to the correlation between raw values and semantic closeness, all similarity measures can be categorized into two types: negatively correlated and positively correlated. For instance, Euclidean distance is a negatively correlated measure and cosine similarity a positively correlated one, as a smaller Euclidean distance and a larger cosine value between two vectors both imply higher semantic closeness of their corresponding tokens. Next, we design scoring functions for both types of similarity measures.

Scoring Function for Negative Correlation Measures. We take Euclidean distance as an example to design the scoring function $u: \mathcal{X}' \times \mathcal{Y}' \to \mathbb{R}$. For any input set $\mathcal{X}'$ and its corresponding output set $\mathcal{Y}'$, we first compute the Euclidean distance $d(x, y)$ between each $x \in \mathcal{X}'$ and $y \in \mathcal{Y}'$. Specifically, we have $d(x, y) = \|\Phi(x) - \Phi(y)\|_2$, where $\Phi(x)$ and $\Phi(y)$ are the vector representations of $x$ and $y$, respectively. Then, we normalize the distances of all pairs of tokens to the range $[0, 1]$ as $d'(x, y) = \frac{d(x, y) - d_{min}}{d_{max} - d_{min}}$, where $d_{min} = \min_{x \in \mathcal{X}', y \in \mathcal{Y}'} d(x, y)$ and $d_{max} = \max_{x \in \mathcal{X}', y \in \mathcal{Y}'} d(x, y)$. Next, we transform the normalized distance $d'(x, y)$ into the score of output token $y$ for input token $x$ as $u(x, y) = -d'(x, y)$. After this transformation, a more similar pair $x, y$ of input and output tokens has a higher score $u(x, y)$. Finally, by repeating the above steps on all disjoint partitions of adjacent tokens with the same $\mathcal{X}'$ and $\mathcal{Y}'$, we obtain the scoring functions for all tokens.

# Algorithm 2 Document Sanitization

Input: Original document $D = \langle R_i \rangle_{i=1}^m$, sampling function $f_{\mathrm{sample}}$, stopword list $T$

Output: Sanitized document $D'$

1: Initialize the sanitized document $D' = \emptyset$
2: for all records $R \in D$ do
3: Initialize the sanitized record $R' = \emptyset$
4: for all tokens $x \in R$ do
5: if CusText+ is used and $x \in T$ then
6: Append $x$ to $R'$
7: else
8: $x' \gets f_{\mathrm{sample}}(x)$ and append $x'$ to $R'$
9: Add $R'$ to $D'$
10: return $D'$

Scoring Function for Positive Correlation Measures. We take cosine similarity as another example to design the scoring function $u$. For any input set $\mathcal{X}'$ and its corresponding output set $\mathcal{Y}'$, we compute the cosine similarity $\cos(x, y)$ between each $x \in \mathcal{X}'$ and $y \in \mathcal{Y}'$, where $\cos(x, y) = \frac{\langle \Phi(x), \Phi(y) \rangle}{\|\Phi(x)\| \cdot \|\Phi(y)\|}$ and $\Phi(x)$ and $\Phi(y)$ are the vector representations of $x$ and $y$. Then, the normalization procedure is the same as that for Euclidean distance, but we use the normalized similarity directly, instead of its additive inverse, as the score, i.e., $u(x, y) = \frac{\cos(x, y) - \cos_{min}}{\cos_{max} - \cos_{min}}$, where $\cos_{min}$ and $\cos_{max}$ are the minimum and maximum cosine similarities over all pairs. Finally, we repeat the above steps on all disjoint partitions of adjacent tokens to obtain all scoring functions.

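Both constructions reduce to a min-max normalization followed by an optional sign flip, which can be sketched jointly as follows (an illustration with hypothetical values; the input is a matrix of raw distances or similarities for one group of adjacent tokens):

```python
import numpy as np

def em_scores(values, negative_correlation=True):
    """Normalize raw pairwise values for one adjacent group into EM scores.

    values: |X'| x |Y'| array of Euclidean distances (negative
    correlation) or cosine similarities (positive correlation).
    The output lies in [-1, 0] or [0, 1], so the sensitivity is at most 1.
    """
    v_min, v_max = values.min(), values.max()
    normalized = (values - v_min) / (v_max - v_min)  # scale to [0, 1]
    # Distances: smaller is closer, so negate; similarities: use as-is.
    return -normalized if negative_correlation else normalized
```

Either way, closer token pairs end up with higher scores, satisfying both design rules above.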
Sampling Procedure. After acquiring the scoring function $u$ for each input token $x$, the sampling function $f_{\mathrm{sample}}$ generates the sanitized token $x'$ for $x$ based on the exponential mechanism $\mathcal{M}(\{x\}, u, \mathcal{Y}')$ with a privacy parameter $\epsilon > 0$. The pseudocode for sanitizing a document based on $f_{\mathrm{sample}}$ is provided in Algorithm 2. Theoretically, $f_{\mathrm{sample}}$ satisfies $\epsilon$-DP. For any input set $\mathcal{X}'$ and its corresponding output set $\mathcal{Y}'$, the sensitivity $\Delta u$ between any two adjacent input tokens $x, x' \in \mathcal{X}'$ is bounded by 1 according to the design of the scoring function $u$, i.e.,

$$
\Delta u = \max_{y \in \mathcal{Y}'} \max_{x, x' \in \mathcal{X}'} |u(x, y) - u(x', y)| \leq 1.
$$

Given a privacy parameter $\epsilon > 0$, the probability of obtaining an output token $y \in \mathcal{Y}'$ for an input token $x \in \mathcal{X}'$ is as follows:

$$
\Pr[f_{\mathrm{sample}}(x) = y] = \frac{\exp\left(\frac{\epsilon u(x, y)}{2\Delta u}\right)}{\sum_{y' \in \mathcal{Y}'} \exp\left(\frac{\epsilon u(x, y')}{2\Delta u}\right)}.
$$

We can prove that the sampling function $f_{\mathrm{sample}}$ satisfies $\epsilon$-DP because, for any two input tokens $x, x' \in \mathcal{X}'$ and any output token $y \in \mathcal{Y}'$, it holds that

$$
\begin{aligned}
\frac{\Pr[f_{\mathrm{sample}}(x) = y]}{\Pr[f_{\mathrm{sample}}(x') = y]} &= \frac{\exp\left(\frac{\epsilon u(x, y)}{2\Delta u}\right) \Big/ \sum_{y' \in \mathcal{Y}'} \exp\left(\frac{\epsilon u(x, y')}{2\Delta u}\right)}{\exp\left(\frac{\epsilon u(x', y)}{2\Delta u}\right) \Big/ \sum_{y' \in \mathcal{Y}'} \exp\left(\frac{\epsilon u(x', y')}{2\Delta u}\right)} \\
&= e^{\frac{\epsilon \cdot (u(x, y) - u(x', y))}{2\Delta u}} \cdot \frac{\sum_{y' \in \mathcal{Y}'} \exp\left(\frac{\epsilon u(x', y')}{2\Delta u}\right)}{\sum_{y' \in \mathcal{Y}'} \exp\left(\frac{\epsilon u(x, y')}{2\Delta u}\right)} \\
&\leq e^{\frac{\epsilon}{2}} \cdot e^{\frac{\epsilon}{2}} \cdot \frac{\sum_{y' \in \mathcal{Y}'} \exp\left(\frac{\epsilon u(x, y')}{2\Delta u}\right)}{\sum_{y' \in \mathcal{Y}'} \exp\left(\frac{\epsilon u(x, y')}{2\Delta u}\right)} = e^{\epsilon}.
\end{aligned}
$$

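The bound can also be checked numerically. The following toy check uses made-up normalized scores for two adjacent tokens; it illustrates the inequality and is not part of the paper's evaluation.

```python
import math

def em_probs(scores, epsilon, delta_u=1.0):
    """Exact EM sampling probabilities for one token's score vector."""
    weights = [math.exp(epsilon * u / (2 * delta_u)) for u in scores]
    total = sum(weights)
    return [w / total for w in weights]

# Normalized scores in [-1, 0] for two adjacent tokens x and x'
# over a shared output set of three candidates (sensitivity <= 1).
u_x = [0.0, -0.4, -1.0]
u_x_adj = [-1.0, -0.2, 0.0]
eps = 2.0
p = em_probs(u_x, eps)
q = em_probs(u_x_adj, eps)
# epsilon-DP: every probability ratio (in both directions) is <= e^eps.
assert all(max(a / b, b / a) <= math.exp(eps) for a, b in zip(p, q))
```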
# 4.3 The CusText+ Mechanism

Since not all tokens contain sensitive information, our CusText mechanism, which replaces all tokens, might be over-protective. Therefore, we can retain non-sensitive original tokens with low privacy risk (e.g., stopwords) to improve the utility of the sanitized text. In practice, we use a predefined list of stopwords $T$ (e.g., the collection of stopwords in the NLTK library), check whether each token $x$ is included in $T$, and keep $x$ in the sanitized document if $x \in T$, or replace $x$ with $x' = f_{\mathrm{sample}}(x)$ otherwise. This procedure, called the CusText+ mechanism, is also described in Algorithm 2.

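Putting the pieces together, the document loop of Algorithm 2, including the CusText+ stopword shortcut, can be sketched as follows (`f_sample` stands for the EM-based sampling function built earlier; the sketch is illustrative):

```python
def sanitize_document(document, f_sample, stopwords=frozenset()):
    """Sanitize a document record-by-record (a sketch of Algorithm 2).

    document: list of records, each a list of tokens.
    f_sample: EM-based sampling function mapping a token to its
    sanitized replacement.
    stopwords: tokens kept verbatim; an empty set gives plain CusText,
    a stopword list (e.g., from NLTK) gives CusText+.
    """
    sanitized = []
    for record in document:
        new_record = []
        for token in record:
            if token in stopwords:
                new_record.append(token)          # CusText+: keep low-risk token
            else:
                new_record.append(f_sample(token))  # sample a replacement
        sanitized.append(new_record)
    return sanitized
```

Since stopwords are kept deterministically and every other token is perturbed by an $\epsilon$-DP sampler, the per-token guarantee of CusText is unchanged for the sanitized tokens.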
# 5 Experiments

# 5.1 Experimental Setup

Following (Feyisetan et al., 2020; Yue et al., 2021), we choose two datasets from the GLUE benchmark (Wang et al., 2019) and one medical dataset, MedSTS (Wang et al., 2020), all of which contain sensitive information, for our experiments. Detailed descriptions of the three datasets are as follows:

- SST-2 is a popular movie reviews dataset with 67k training samples and 1.8k test samples for sentiment classification, where accuracy is used as the evaluation metric.
- MedSTS is a medical dataset with 1,642 training samples and 412 test samples for semantic similarity computation, where the Pearson correlation coefficient is used for evaluation.
- QNLI is a sentence dataset with 105k training samples and 5.2k test samples for sentence-pair classification, where accuracy is used as the evaluation metric.

In the experiments, we compare CusText with two existing text sanitization mechanisms, i.e., FBDD (Feyisetan et al., 2020) and SANTEXT (Yue et al., 2021). In the training phase, we apply each mechanism to sanitize the training data and then use the sanitized documents to fine-tune the pre-trained model. In the evaluation phase, we sanitize the test data with the same mechanism as used for training. When producing the sanitized documents, both the input set $\mathcal{X}$ and output set $\mathcal{Y}$ are assigned to the vocabulary of Counter-Fitting (Mrksic et al., 2016) (of size 65,713), and out-of-vocabulary (OOV) tokens except numbers are retained. For a fair comparison, we adopt the same vocabulary for GloVe (Pennington et al., 2014) as for Counter-Fitting. Euclidean distance and cosine similarity are used as the similarity measures for GloVe and Counter-Fitting, respectively. We use the stopword list in NLTK for CusText+. For each downstream task, we set the maximum sequence length to 128 and the number of training epochs to 3. On the SST-2 and QNLI datasets, we set the batch size to 64 and the learning rate to $2 \times 10^{-5}$ using bert-base-uncased$^4$ as the pre-trained model. On the MedSTS dataset, we set the batch size to 8 and the learning rate to $5 \times 10^{-5}$ using ClinicalBERT (Alsentzer et al., 2019) as the pre-trained model. Other hyperparameters are the same as those used in the default Transformer model (Wolf et al., 2020). All experiments were conducted on a server with two Intel Xeon Silver 4210R 2.40GHz CPUs and one NVIDIA Tesla V100 SXM2 (32GB).

# 5.2 Experimental Results

Comparison of Different Mechanisms for Text Sanitization. In this experiment, we fix the customization parameter $K$ to 20 in CusText and CusText+ and vary the privacy parameter $\epsilon = 1, 2, 3$ for DP. The evaluation of the effect of $K$ on the performance of CusText will be presented later. Furthermore, we choose GloVe as the token embedding in CusText and CusText+ for a fair comparison, since FBDD, SANTEXT, and SANTEXT+ cannot apply the Counter-Fitting embedding: they only work with metric distances (e.g., Euclidean distance on GloVe) due to the inherent limitation of MLDP and thus cannot handle the non-metric cosine similarity in Counter-Fitting. Finally, because a mechanism is $\epsilon$-DP if it is $\epsilon'$-MLDP (Chatzikokolakis et al., 2013), where $\epsilon = \epsilon' \cdot d_{max}$ and $d_{max} = \max_{x \in \mathcal{X}, y \in \mathcal{Y}} d(x, y)$, we re-scale the privacy parameter $\epsilon$ in FBDD, SANTEXT, and SANTEXT+ by $d_{max}$ to align their privacy levels with those of our mechanisms.

<table><tr><td rowspan="2">Mechanisms</td><td colspan="3">SST-2</td><td colspan="3">MedSTS</td><td colspan="3">QNLI</td></tr><tr><td>ε = 1</td><td>ε = 2</td><td>ε = 3</td><td>ε = 1</td><td>ε = 2</td><td>ε = 3</td><td>ε = 1</td><td>ε = 2</td><td>ε = 3</td></tr><tr><td>Random</td><td colspan="3">0.5014</td><td colspan="3">0.0382</td><td colspan="3">0.5037</td></tr><tr><td>FBDD</td><td>0.5022</td><td>0.5041</td><td>0.5032</td><td>0.0321</td><td>0.0368</td><td>0.0411</td><td>0.5021</td><td>0.5152</td><td>0.5368</td></tr><tr><td>SANTEXT</td><td>0.5014</td><td>0.4827</td><td>0.5091</td><td>0.0850</td><td>0.1673</td><td>0.1124</td><td>0.5304</td><td>0.5302</td><td>0.5357</td></tr><tr><td>CusText</td><td>0.6985</td><td>0.7172</td><td>0.7029</td><td>0.4957</td><td>0.5112</td><td>0.5242</td><td>0.6926</td><td>0.6884</td><td>0.7133</td></tr><tr><td>SANTEXT+</td><td>0.7211</td><td>0.7446</td><td>0.7260</td><td>0.4143</td><td>0.4271</td><td>0.5423</td><td>0.7607</td><td>0.7636</td><td>0.7493</td></tr><tr><td>CusText+</td><td>0.7501</td><td>0.7452</td><td>0.7683</td><td>0.6172</td><td>0.6316</td><td>0.6213</td><td>0.7528</td><td>0.7602</td><td>0.7740</td></tr><tr><td>Original</td><td colspan="3">0.9050</td><td colspan="3">0.7598</td><td colspan="3">0.9096</td></tr></table>

Table 1: Utility comparison of different sanitization mechanisms at similar privacy levels.

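As a concrete instance of this conversion (with illustrative numbers only), matching an MLDP mechanism to a target $\epsilon$-DP level amounts to setting $\epsilon' = \epsilon / d_{max}$:

```python
def mldp_epsilon_for_target(eps_dp, d_max):
    """MLDP parameter eps' whose worst-case guarantee matches eps-DP.

    An eps'-MLDP mechanism is (eps' * d_max)-DP, where d_max is the
    largest pairwise distance over the vocabulary, so eps' = eps / d_max.
    """
    return eps_dp / d_max

# e.g., target eps = 2 and a hypothetical maximum pairwise distance of 8.0
eps_prime = mldp_epsilon_for_target(2.0, 8.0)
assert eps_prime == 0.25
# Sanity check: converting back recovers the eps-DP level.
assert eps_prime * 8.0 == 2.0
```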
188
+ Table 1 presents the utilities of different text sanitization mechanisms with $\epsilon$ -DP ( $\epsilon = 1,2,3$ ) on three datasets. The results demonstrate the huge advantages of CusText compared with two existing mechanisms, i.e., FBDD and SANTEXT, which achieves over $20\%$ improvements in accuracy on the SST-2 and QNLI datasets and more than $50\%$ improvement in Pearson correlation coefficient on the MedSTS dataset. Compared with SANTEXT and CusText, their improved versions, i.e., SANTEXT+ and CusText+, exhibit significantly better performance because they keep some original tokens to preserve original semantics. Generally, the results indicate the superior performance of CusText by showing that using a customized, smaller output set for each input token can lead to better utilities at similar (theoretical) privacy levels.
+
+ Privacy-Utility Trade-off. Next, we compare SANTEXT and CusText in terms of privacy-utility trade-offs. As shown in (Yue et al., 2021) and confirmed by our previous results, FBDD performs worse than SANTEXT and CusText, so it is excluded from the remaining experiments. To alleviate the effects of the different DP definitions underlying SANTEXT and CusText, we do not use
+
+ ![](images/d451f46a1f83aa5094c0a7a43499440083426c733ecbd8892df63958e3734caa.jpg)
+ Figure 4: Privacy-utility trade-offs in terms of success rates of mask token inference attacks vs. accuracy rates by varying the privacy parameter $\epsilon \in [0.01, 50]$ on the SST-2 dataset. Here, "Original" denotes the result on unsanitized data.
+
+ the privacy parameter $\epsilon$ , which corresponds to the worst possible privacy leakage but may not reflect the level of privacy protection in practice. Instead, we adopt two privacy attacks to evaluate the privacy protection levels: one is the Mask Token Inference Attack of (Yue et al., 2021), and the other is the Query Attack proposed in this work.
+
+ We first present the results for mask token inference attacks. To recover the raw text from a sanitized text, an adversary can use a pre-trained BERT model to infer the original tokens, since BERT is trained via masked language modeling. The attack sequentially replaces each token in the sanitized text with the special token "[MASK]", feeds the masked text to BERT, and takes the predicted filler for "[MASK]" as its guess of the original token. An attack is considered successful if the predicted token matches the original one. Finally, we compute the success rate over all attacks, denoted as $r_{mask}$ , and define the privacy protection level as $1 - r_{mask}$ .
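The attack loop and the resulting privacy measure can be sketched as follows; the `predict_masked` callback is a placeholder standing in for the actual BERT fill-mask call, which is abstracted away here:

```python
def mask_token_inference_attack(sanitized_tokens, original_tokens, predict_masked):
    """Mask each position of the sanitized text in turn, ask a masked
    language model for the most likely filler, and count how often the
    prediction recovers the original (pre-sanitization) token."""
    hits = 0
    for i, original in enumerate(original_tokens):
        masked = list(sanitized_tokens)
        masked[i] = "[MASK]"  # mask one position at a time
        if predict_masked(masked, i) == original:
            hits += 1
    r_mask = hits / len(original_tokens)
    return 1.0 - r_mask  # privacy protection level
```

In the paper the predictions come from pre-trained BERT; plugging a fill-mask model into `predict_masked` reproduces the attack.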
+
+ Figure 4 illustrates the privacy-utility trade-offs of CusText (based on GloVe and Counter-Fitting, respectively) and SANTEXT (based on GloVe) by varying the value of $\epsilon$ on the SST-2 dataset. The
200
+
201
+ <table><tr><td rowspan="2">Token</td><td colspan="3">SANTEXT</td><td colspan="4">CusText (GloVe)</td><td colspan="4">CusText (Counter-Fitting)</td></tr><tr><td>ε&#x27; = 1</td><td>ε&#x27; = 2</td><td>ε&#x27; = 3</td><td>ε = 1</td><td>ε = 2</td><td>ε = 3</td><td>ε = 8</td><td>ε = 1</td><td>ε = 2</td><td>ε = 3</td><td>ε = 8</td></tr><tr><td>she</td><td>2350</td><td>35</td><td>4</td><td>1000</td><td>200</td><td>80</td><td>5</td><td>5500</td><td>1000</td><td>320</td><td>4</td></tr><tr><td>car</td><td>1300</td><td>14</td><td>1</td><td>1220</td><td>250</td><td>90</td><td>6</td><td>420000</td><td>90000</td><td>31000</td><td>3200</td></tr><tr><td>alice</td><td>1550</td><td>20</td><td>3</td><td>1190</td><td>240</td><td>100</td><td>6</td><td>1700</td><td>360</td><td>120</td><td>9</td></tr><tr><td>happy</td><td>3200</td><td>55</td><td>4</td><td>1490</td><td>290</td><td>110</td><td>8</td><td>320000</td><td>55000</td><td>21500</td><td>1500</td></tr><tr><td>Accuracy</td><td>0.4959</td><td>0.5799</td><td>0.7958</td><td>0.6985</td><td>0.7172</td><td>0.7029</td><td>0.8155</td><td>0.7117</td><td>0.7370</td><td>0.7298</td><td>0.7957</td></tr></table>
202
+
203
+ Table 2: Results for query attacks on four selected tokens in the SST-2 dataset.
204
+
205
+ results confirm that CusText achieves better utility-privacy trade-offs than SANTEXT and remains a relatively good utility (accuracy at around 0.7) when the privacy level approaches 1 (over 0.98). In comparison, SANTEXT degenerates to a random classifier (accuracy at around 0.5). Meanwhile, the results also imply that Counter-Fitting works better with CusText than GloVe. The higher performance of Counter-Fitting can be attributed to its better representations of synonyms.
206
+
207
+ We then describe the results for query attacks. Since the input token is contained in its corresponding output set and always has the highest score, the probability that it is sampled by $f_{\text{sample}}$ is also the highest among all output tokens. An adversary can determine the input token by querying the data owner for the sanitized document multiple times, as the input token will have the highest frequency among all output tokens after a sufficiently large number of queries. Thus, we use the smallest number $N$ of queries an adversary needs to infer the input token at a confidence level of $95\%$ as a new measure of the privacy protection level. Here, the larger the value of $N$ is, the higher the level of privacy protection is. In the experiment, we obtain the value of $N$ by using the Monte Carlo method (Gentle, 2009) to sample the output tokens until the confidence level of determining the input token from the output distribution reaches $95\%$ .
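The core of the query attack is a frequency count over repeated queries. Below is a minimal sketch; the toy token distribution is an illustrative assumption, not the mechanism's actual output distribution:

```python
import random
from collections import Counter

def query_attack(sample_output, n_queries, seed=0):
    """Query the mechanism repeatedly for a sanitization of the same
    input token and guess the output token observed most often."""
    rng = random.Random(seed)
    counts = Counter(sample_output(rng) for _ in range(n_queries))
    return counts.most_common(1)[0][0]

# Toy output distribution: the true input token "she" has the highest
# sampling probability, as it does under the exponential mechanism.
tokens, weights = ["she", "her", "he", "car"], [0.4, 0.2, 0.2, 0.2]
sample = lambda rng: rng.choices(tokens, weights=weights)[0]
```

The paper's measure $N$ is then the smallest `n_queries` at which this guess identifies the input token with 95% confidence, estimated by repeating such Monte Carlo simulations.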
208
+
209
+ Table 2 further confirms that CusText achieves better privacy-utility trade-offs than SANTEXT. Although SANTEXT achieves a good utility when $\epsilon' = 3$ (i.e., with 3-MLDP), it almost provides no privacy protection as input tokens can be inferred by performing only a few queries. CusText (with either GloVe or Counter-Fitting) remains relatively good privacy protection levels when $\epsilon = 3$ (i.e., with 3-DP) while achieving high utilities. Generally, Counter-Fitting also outperforms GloVe for CusText. But the privacy protections for different tokens vary very much for Counter-Fitting: "she" and "alice" are more vulnerable than "car" and "happy". This is because "she" and "alice" are
210
+
211
+ ![](images/67b6ea87cd41ddef2caa56b944af681671aa74c9dd0588c05ef04fc32773d09e.jpg)
212
+ Figure 5: Privacy-utility trade-offs of CusText with different customization parameters $K$ by varying the privacy parameter $\epsilon \in [0.001, 50]$ on the SST-2 dataset.
+
+ mapped to tokens that are semantically less related to them during mapping function generation.
+
+ Effect of $K$ on CusText. To test the effect of $K$ on CusText in practice, we study the privacy-utility trade-offs with different customization parameters $K = 5,20,50$ on the SST-2 dataset. We choose the mask token inference attack as the privacy metric since it is more closely tied to semantics, and we use Counter-Fitting because of its better performance than GloVe, as shown previously.
+
+ The results for different $K$ 's are presented in Figure 5. We observe that the performance of CusText is generally stable across different $K$ 's, but it achieves slightly better utility with smaller $K$ at relatively high privacy protection levels ( $>0.9$ ). This is because, on the one hand, the output tokens are semantically more similar to the input token when $K$ is smaller. On the other hand, a smaller $K$ also makes it easier to infer the input token, thus lowering the achievable privacy protection level (e.g., for $K = 5$ , it does not exceed 0.96 even when $\epsilon$ is decreased to 0.001).
+
+ # 6 Concluding Remarks
+
+ In this work, we study the problem of differentially private text sanitization. We propose a novel CusText mechanism consisting of a mapping scheme that assigns each input token a customized output set, together with sampling function generation methods based on this mapping scheme and the exponential mechanism, to reduce privacy costs while improving the utility of sanitized texts. Extensive experiments demonstrate that CusText achieves better privacy-utility trade-offs than state-of-the-art text sanitization mechanisms. In the future, we will explore how to improve our mechanism by adaptively allocating privacy costs across tokens and seek a better way to decide whether a token is sensitive than relying on a pre-defined stopwords list.
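To make the sampling step concrete, here is a minimal sketch of exponential-mechanism sampling over a customized output set. The candidate list and utility function are illustrative placeholders: in CusText the $K$ candidates would come from the token-to-output-set mapping, and the utility from embedding similarity.

```python
import math
import random

def sample_output(x, candidates, utility, eps, rng=random):
    """Exponential-mechanism sampling over a customized output set:
    candidate y is drawn with probability proportional to
    exp(eps * utility(x, y) / 2), for a utility with sensitivity 1."""
    scores = [math.exp(eps * utility(x, y) / 2.0) for y in candidates]
    r = rng.random() * sum(scores)
    acc = 0.0
    for y, s in zip(candidates, scores):
        acc += s
        if r < acc:
            return y
    return candidates[-1]  # guard against floating-point round-off
```

With a large $\epsilon$ the input token itself (which has the highest utility) is returned almost surely; as $\epsilon$ shrinks, the distribution over the $K$ candidates approaches uniform.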
+
+ # Acknowledgements
+
+ This work was supported by the National Natural Science Foundation of China (under Grant numbers 62202170, 62202169) and Alibaba Group through the Alibaba Innovation Research Program.
+
+ # Limitations
+
+ First, as indicated in Table 2, different tokens are not equally vulnerable to privacy attacks. As such, assigning every token the same output size $K$ and privacy parameter $\epsilon$ might not be an ideal choice. An improved method would adaptively allocate privacy costs across tokens so that all of them are adequately protected. Second, we adopt two simple strategies to decide whether a token is sensitive: assuming all tokens are sensitive, or relying on a pre-defined stopwords list. However, the former might be over-protective, while the latter can lead to privacy leakage, since stopwords might help infer other sanitized tokens. Therefore, a more flexible and practical way to decide the sensitivity of tokens is required.
+
+ # References
+
+ Martín Abadi, Andy Chu, Ian J. Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. 2016. Deep learning with differential privacy. In CCS, pages 308-318.
+ Emily Alsentzer, John Murphy, William Boag, Wei-Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72-78.
+ Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, and Pasin Manurangsi. 2022. Large-scale differentially private BERT. In EMNLP (Findings), pages 6481-6491.
+ Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. 2019. The secret sharer: Evaluating and testing unintended memorization in neural networks. In USENIX Security Symposium, pages 267-284.
+ Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. 2021. Extracting training data from large language models. In USENIX Security Symposium, pages 2633-2650.
+ Konstantinos Chatzikokolakis, Miguel E. Andrés, Nicolás Emilio Bordenabe, and Catuscia Palamidessi. 2013. Broadening the scope of differential privacy using metrics. In Privacy Enhancing Technologies (PETS), pages 82-102.
+ John C. Duchi, Michael I. Jordan, and Martin J. Wainwright. 2013. Local privacy and statistical minimax rates. In FOCS, pages 429-438.
+ Christophe Dupuy, Radhika Arava, Rahul Gupta, and Anna Rumshisky. 2022. An efficient DP-SGD mechanism for large scale NLU models. In ICASSP, pages 4118-4122.
+ Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam D. Smith. 2006. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography (TCC), pages 265-284.
+ Oluwaseyi Feyisetan, Borja Balle, Thomas Drake, and Tom Diethe. 2020. Privacy- and utility-preserving textual analysis via calibrated multivariate perturbations. In WSDM, pages 178-186.
+ Oluwaseyi Feyisetan, Tom Diethe, and Thomas Drake. 2019. Leveraging hierarchical representations for preserving privacy and utility in text. In ICDM, pages 210-219.
+ James E. Gentle. 2009. Monte Carlo methods for statistical inference. In Computational Statistics, pages 417-433. Springer.
+ Jack Hessel and Alexandra Schofield. 2021. How effective is BERT without word ordering? Implications for language understanding and data privacy. In ACL/IJCNLP (Short Papers), pages 204-211.
+ Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, and Sotirios A. Tsaftaris. 2021. Survey: Leakage and privacy at inference time. arXiv:2107.01614.
+ Ninghui Li, Tiancheng Li, and Suresh Venkatasubramanian. 2007. t-closeness: Privacy beyond k-anonymity and l-diversity. In ICDE, pages 106-115.
+ Xuechen Li, Florian Tramèr, Percy Liang, and Tatsunori Hashimoto. 2022. Large language models can be strong differentially private learners. In ICLR.
+ Lingjuan Lyu, Xuanli He, and Yitong Li. 2020. Differentially private representation for NLP: Formal guarantee and an empirical study on privacy and fairness. In EMNLP (Findings), pages 2355-2365.
+ Ashwin Machanavajjhala, Daniel Kifer, Johannes Gehrke, and Muthuramakrishnan Venkitasubramaniam. 2007. L-diversity: Privacy beyond k-anonymity. ACM Trans. Knowl. Discov. Data, 1(1):3:1-3:52.
+ Frank McSherry and Kunal Talwar. 2007. Mechanism design via differential privacy. In FOCS, pages 94-103.
+ Tomás Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv:1301.3781.
+ Fatemehsadat Mireshghallah, Huseyin A. Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, and Robert Sim. 2021. Privacy regularization: Joint privacy-utility optimization in language models. In NAACL-HLT, pages 3799-3807.
+ Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Maria Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve J. Young. 2016. Counter-fitting word vectors to linguistic constraints. In NAACL-HLT, pages 142-148.
+ Takao Murakami and Yusuke Kawamoto. 2019. Utility-optimized local differential privacy mechanisms for distribution estimation. In USENIX Security Symposium, pages 1877-1894.
+ Yiwen Nie, Wei Yang, Liusheng Huang, Xike Xie, Zhenhua Zhao, and Shaowei Wang. 2019. A utility-optimized framework for personalized private histogram estimation. IEEE Trans. Knowl. Data Eng., 31(4):655-669.
+ Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In EMNLP, pages 1532-1543.
+ Chen Qu, Weize Kong, Liu Yang, Mingyang Zhang, Michael Bendersky, and Marc Najork. 2021. Natural language understanding with privacy-preserving BERT. In CIKM, pages 1488-1497.
+ Gerard Salton and Chris Buckley. 1988. Term-weighting approaches in automatic text retrieval. Inf. Process. Manag., 24(5):513-523.
+ Congzheng Song and Ananth Raghunathan. 2020. Information leakage in embedding models. In CCS, pages 377-390.
+ Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR.
+ Yanshan Wang, Naveed Afzal, Sunyang Fu, Liwei Wang, Feichen Shen, Majid Rastegar-Mojarad, and Hongfang Liu. 2020. MedSTS: a resource for clinical semantic textual similarity. Lang. Resour. Eval., 54(1):57-72.
+ Thomas Wolf, Lysandre Debut, et al. 2020. Transformers: State-of-the-art natural language processing. In EMNLP (Demos), pages 38-45.
+ Xiang Yue, Minxin Du, Tianhao Wang, Yaliang Li, Huan Sun, and Sherman S. M. Chow. 2021. Differential privacy for text analytics via natural text sanitization. In ACL/IJCNLP (Findings), pages 3853-3866.
+ Ying Zhao and Jinjun Chen. 2022. A survey on differential privacy for unstructured data content. ACM Comput. Surv., 54(10s):207:1-207:28.
+
+ # A For every submission:
+
+ A1. Did you describe the limitations of your work?
+
+ Left blank.
+
+ A2. Did you discuss any potential risks of your work?
+
+ Left blank.
+
+ A3. Do the abstract and introduction summarize the paper's main claims?
+
+ Left blank.
+
+ A4. Have you used AI writing assistants when working on this paper?
+
+ Left blank.
+
+ # B Did you use or create scientific artifacts?
+
+ Left blank.
+
+ B1. Did you cite the creators of artifacts you used?
+
+ Left blank.
+
+ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
+
+ Left blank.
+
+ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
+
+ Left blank.
+
+ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
+
+ Left blank.
+
+ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
+
+ Left blank.
+
+ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
+
+ Left blank.
+
+ # C Did you run computational experiments?
+
+ Left blank.
+
+ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
+
+ Left blank.
+
+ The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
+
+ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values? Left blank.
+ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run? Left blank.
+ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)? Left blank.
+
+ # D Did you use human annotators (e.g., crowdworkers) or research with human participants?
+
+ Left blank.
+
+ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.? Left blank.
+ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)? Left blank.
+ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used? Left blank.
+ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board? Left blank.
+ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data? Left blank.
2023/A Customized Text Sanitization Mechanism with Differential Privacy/images.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d6150c2ce4e8984a08ef76cb37d7df0cea250cde437947fe9fdfa1cad94f01b
+ size 279500
2023/A Customized Text Sanitization Mechanism with Differential Privacy/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Diffusion Model for Event Skeleton Generation/98150961-6381-4a26-81e7-35b4d180926d_content_list.json ADDED
@@ -0,0 +1,2052 @@
+ [
+   {
+     "type": "text",
+     "text": "A Diffusion Model for Event Skeleton Generation",
+     "text_level": 1,
+     "bbox": [240, 90, 754, 109],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "Fangqi Zhu $^{1,3*}$ , Lin Zhang $^{3}$ , Jun Gao $^{1}$ , Bing Qin $^{1}$ , Ruifeng Xu $^{1,2\\dagger}$ , Haiqin Yang $^{3\\dagger}$",
+     "bbox": [166, 129, 847, 148],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "<sup>1</sup> Harbin Institute of Technology, Shenzhen, China",
+     "bbox": [314, 148, 687, 164],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "2 Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies",
+     "bbox": [186, 164, 815, 181],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "$^{3}$ International Digital Economy Academy (IDEA)",
+     "bbox": [312, 181, 689, 197],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "zhufangqi hitsz@gmail.com, xuruifeng@hit.edu.cn, hqyang@ieee.org",
+     "bbox": [176, 198, 825, 214],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "Abstract",
+     "text_level": 1,
+     "bbox": [260, 252, 339, 266],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "Event skeleton generation, aiming to induce an event schema skeleton graph with abstracted event nodes and their temporal relations from a set of event instance graphs, is a critical step in the temporal complex event schema induction task. Existing methods effectively address this task from a graph generation perspective but suffer from noise-sensitive and error accumulation, e.g., the inability to correct errors while generating schema. We, therefore, propose a novel Diffusion Event Graph Model (DEGM) to address these issues. Our DEGM is the first workable diffusion model for event skeleton generation, where the embedding and rounding techniques with a custom edge-based loss are introduced to transform a discrete event graph into learnable latent representation. Furthermore, we propose a denoising training process to maintain the model's robustness. Consequently, DEGM derives the final schema, where error correction is guaranteed by iteratively refining the latent representation during the schema generation process. Experimental results on three IED bombing datasets demonstrate that our DEGM achieves better results than other state-of-the-art baselines. Our code and data are available at https://github.com/zhufq00/EventSkeletonGeneration.",
+     "bbox": [141, 282, 460, 693],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "1 Introduction",
+     "text_level": 1,
+     "bbox": [114, 709, 258, 724],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "Event schema induction is to identify common patterns and structures in event data, which can extract high-level representation of the events. Current event schema induction tasks mainly focus on simple event schemas, e.g., templates (Chambers, 2013) and scripts (Chambers and Jurafsky, 2009). However, real-world events are usually more complex, which include multiple atomic events, entities, and their relations, which require more advanced",
+     "bbox": [112, 737, 489, 882],
+     "page_idx": 0
+   },
+   {
+     "type": "image",
+     "img_path": "images/78b97d1808c819b04a19e048949b69c19729cd6dc73515e4e75d795e353de76c.jpg",
+     "image_caption": [
+       "Figure 1: An illustrated example demonstrates the utilization of multiple instance graphs extracted from news articles depicting complex events to generate an event schema skeleton graph for the complex event type Car bombing. The presented instance graph specifically represents the complex event known as the Kabul ambulance bombing. A circle symbolizes an atomic event."
+     ],
+     "image_footnote": [],
+     "bbox": [551, 250, 845, 481],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "techniques to adequately capture and represent the different aspects and relations involved.",
+     "bbox": [507, 637, 880, 668],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "Recently, Li et al. (2021) propose the temporal complex event schema induction task in order to understand these complex events. The task seeks to abstract a general evolution pattern for complex events from multiple event instance graphs. It is divided into two subtasks: event skeleton generation and entity-entity relation completion. The first task focuses on creating the event skeleton, i.e., representing each atomic event with its associated event type as an event node and exploring their temporal relations. The second one is to complete entities and entity links for the event skeleton. In this paper, we focus on event skeleton generation as it is a prerequisite yet formidable task in temporal complex event schema induction. Figure 1 illustrates",
+     "bbox": [507, 677, 882, 917],
+     "page_idx": 0
+   },
+   {
+     "type": "page_footnote",
+     "text": "*Work done when Fangqi was interned at IDEA.",
+     "bbox": [134, 891, 431, 904],
+     "page_idx": 0
+   },
+   {
+     "type": "page_footnote",
+     "text": "† Corresponding authors.",
+     "bbox": [136, 904, 287, 917],
+     "page_idx": 0
+   },
+   {
+     "type": "page_number",
+     "text": "12630",
+     "bbox": [475, 927, 524, 940],
+     "page_idx": 0
+   },
+   {
+     "type": "footer",
+     "text": "Findings of the Association for Computational Linguistics: ACL 2023, pages 12630-12641",
+     "bbox": [220, 945, 774, 958],
+     "page_idx": 0
+   },
+   {
+     "type": "footer",
+     "text": "July 9-14, 2023 ©2023 Association for Computational Linguistics",
+     "bbox": [295, 958, 699, 971],
+     "page_idx": 0
+   },
+   {
+     "type": "text",
+     "text": "an example of instance graphs<sup>1</sup> and the corresponding abstracted schema. Both include abstract event types, such as Attack, and their temporal relations, like Injure happening after Attack.",
+     "bbox": [112, 84, 487, 148],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "Event skeleton generation requires a deep understanding of events and their multi-dimensional relations. Previous methods employ autoregressive graph generation models to generate a schema, sequentially generating event nodes from the previous ones. For example, Li et al. (2021) generate the event node with its potential arguments and propagates edge-aware information within the temporal orders. Jin et al. (2022) improves this approach by applying a Graph Convolutional Network (GCN) to better capture structural information in instance graphs and adopting a similar autoregressive generation approach to generate event graphs. However, autoregressive generation methods for event skeleton generation result in errors accumulating over time, which may degrade the performance of the generation model. For instance, as shown in Figure 1, the model may mistakenly generate \"Explode\" as \"Die\", causing it to fail to generate subsequent events correctly. Intuitively, as the number of event nodes increases, the error accumulation becomes more severe. This comes from two factors. The first one is error propagation in the autoregressive graph generation models because they are noise-sensitive and strongly rely on the correctness of the generated node. If the model generates an incorrect node, it will lead to a cascading effect of errors in generating the schema. Robustness is a serious issue in autoregressive methods. The second factor is the model's inability to correct errors in the generation procedure. Hence, we need a model, which can correct the generated event-type nodes during generating.",
+     "bbox": [115, 149, 489, 678],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "To this end, we propose a novel event graph generation model, dubbed Diffusion Event Graph Model (DEGM), to address these issues. To battle the model's robustness, we propose a diffusion-based method, inspired by the outstanding performance in recent research (Sun et al., 2022; Xiao et al., 2022). By carefully selecting the amount of Gaussian noise in the diffusion process, the model can remove adversarial perturbations, thereby increasing the model's robustness. However, there are still two challenges in applying this method directly to the event graph: (1) mapping the discrete",
+     "bbox": [112, 680, 489, 873],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "graph structures and event types to a continuous space, and (2) finding a way to recover the event graph from the continuous space. We then develop the denoising stage, including converting the event graph into a sequence and applying an embedding technique to project it to the continuous space. Additionally, we introduce a custom edge-based loss function to capture the missing structural information during the transformation. To tackle the second challenge, we develop a rounding technique to predict the event types based on their representation and a pre-trained classifier to predict the event edges. To address the second issue, we derive the final schema, which guarantees error correction, by iteratively refining the latent representation.",
+     "bbox": [507, 84, 884, 325],
+     "page_idx": 1
+   },
+   {
+     "type": "text",
+     "text": "We summarize our contributions as follows:",
+     "bbox": [527, 326, 855, 341],
+     "page_idx": 1
+   },
+   {
+     "type": "list",
+     "sub_type": "text",
+     "list_items": [
+       "- We propose a novel Diffusion Event Graph model (DEGM) for event skeleton generation, in which a denoising training stage guarantees the model's robustness and the schema generation process fulfills error correction via iterative refinement on the latent representation.",
267
+ "- We are the first to tackle event skeleton generation via diffusion models, where we convert an event graph from discrete nodes to latent variables in a continuous space and train the model parameters by optimizing the event sequence reconstruction and graph structure reconstruction simultaneously.",
268
+ "- Experimental results on the event skeleton generation task demonstrate that our approach achieves better results than state-of-the-art baselines."
269
+ ],
270
+ "bbox": [
271
+ 531,
272
+ 343,
273
+ 884,
274
+ 631
275
+ ],
276
+ "page_idx": 1
277
+ },
278
+ {
279
+ "type": "text",
280
+ "text": "2 Preliminaries and Problem Statement",
281
+ "text_level": 1,
282
+ "bbox": [
283
+ 507,
284
+ 645,
285
+ 870,
286
+ 662
287
+ ],
288
+ "page_idx": 1
289
+ },
290
+ {
291
+ "type": "text",
292
+ "text": "2.1 Diffusion Models in a Continuous Space",
293
+ "text_level": 1,
294
+ "bbox": [
295
+ 507,
296
+ 671,
297
+ 873,
298
+ 688
299
+ ],
300
+ "page_idx": 1
301
+ },
302
+ {
303
+ "type": "text",
304
+ "text": "A diffusion model typically consists of forward and reverse processes. Given data $\\mathbf{x}_0\\in \\mathbb{R}^d$ , the forward process gradually adds noise to $\\mathbf{x}_0$ to obtain a sequence of latent variables in $\\mathbb{R}^d$ $\\mathbf{x}_1,\\dots ,\\mathbf{x}_T$ where $\\mathbf{x}_T$ is a Gaussian noise. Formally, the forward process can be attained by $q(\\mathbf{x}_t\\mid \\mathbf{x}_{t - 1}) =$ $\\mathcal{N}\\left(\\mathbf{x}_t;\\sqrt{1 - \\beta_t}\\mathbf{x}_{t - 1},\\beta_t\\mathbf{I}\\right)$ , where $\\beta_{t}$ controls the noise level at the $t$ -th step. Denote $\\alpha_{t} = 1 - \\beta_{t}$ and $\\overline{\\alpha}_t = \\sum_{s = 1}^t\\alpha_s$ , we can directly obtain $\\mathbf{x}_t$ as $q\\left(\\mathbf{x}_t\\mid \\mathbf{x}_0\\right) = \\mathcal{N}\\left(\\sqrt{\\overline{\\alpha}_t}\\mathbf{x}_0,1 - \\overline{\\alpha}_t\\mathbf{I}\\right)$ . After the forward process is completed, the reverse denoising process can be formulated as $p_\\theta (\\mathbf{x}_{t - 1}\\mid \\mathbf{x}_t) =$ $\\mathcal{N}(\\mathbf{x}_{t - 1};\\mu_{\\theta}(\\mathbf{x}_{t},t),\\Sigma_{\\theta}(\\mathbf{x}_{t},t))$ where $\\mu_{\\theta}(\\cdot)$ and $\\Sigma_{\\theta}(\\cdot)$ can be implemented using a neural network.",
305
+ "bbox": [
306
+ 505,
307
+ 694,
308
+ 884,
309
+ 919
310
+ ],
311
+ "page_idx": 1
312
+ },
313
+ {
314
+ "type": "page_footnote",
315
+ "text": "<sup>1</sup>For simplicity, we mention \"schema\" as \"event schema skeleton graph\", \"instance graph\" as \"event instance skeleton graph\", and \"event graph\" represents both.",
316
+ "bbox": [
317
+ 112,
318
+ 879,
319
+ 487,
320
+ 919
321
+ ],
322
+ "page_idx": 1
323
+ },
324
+ {
325
+ "type": "page_number",
326
+ "text": "12631",
327
+ "bbox": [
328
+ 477,
329
+ 927,
330
+ 522,
331
+ 940
332
+ ],
333
+ "page_idx": 1
334
+ },
335
+ {
336
+ "type": "text",
337
+ "text": "2.2 Diffusion Models in a Discrete Space",
338
+ "text_level": 1,
339
+ "bbox": [
340
+ 112,
341
+ 84,
342
+ 450,
343
+ 99
344
+ ],
345
+ "page_idx": 2
346
+ },
347
+ {
348
+ "type": "text",
349
+ "text": "For discrete data, e.g., text, Li et al. (2022) employ embedding and rounding techniques to map the text to a continuous space, which can also be recovered.",
350
+ "bbox": [
351
+ 112,
352
+ 105,
353
+ 487,
354
+ 153
355
+ ],
356
+ "page_idx": 2
357
+ },
358
+ {
359
+ "type": "text",
360
+ "text": "Given the embedding of the text $\\mathbf{w}$ , $\\mathrm{EMB}(\\mathbf{w})$ , and suppose $\\mathbf{x}_0$ is computed as $q(\\mathbf{x}_0|\\mathbf{w}) = \\mathcal{N}(\\mathbf{x}_0; \\mathbf{w}, \\beta_0\\mathbf{I})$ , the corresponding training objective is",
361
+ "bbox": [
362
+ 112,
363
+ 154,
364
+ 487,
365
+ 216
366
+ ],
367
+ "page_idx": 2
368
+ },
369
+ {
370
+ "type": "equation",
371
+ "text": "\n$$\n\\begin{array}{l} \\mathcal {L} _ {\\mathbf {x} _ {0}: \\text {s i m p l e}} ^ {\\mathrm {e 2 e}} (\\mathbf {w}) = \\underset {q _ {\\phi} (\\mathbf {x} _ {0: T} | \\mathbf {w})} {\\mathbb {E}} \\left[ \\sum_ {t = 2} ^ {T} [ \\| \\mathbf {x} _ {0} - f _ {\\theta} (\\mathbf {x} _ {t}, t) \\| ^ {2} ] \\right] + \\\\ \\underset {q _ {\\phi} \\left(\\mathbf {x} _ {0: 1} \\mid \\mathbf {w}\\right)} {\\mathbb {E}} \\left[ \\left| \\left| \\operatorname {E M B} (\\mathbf {w}) - f _ {\\theta} \\left(\\mathbf {x} _ {1}, 1\\right) \\right| \\right| ^ {2} - \\log p _ {\\theta} \\left(\\mathbf {w} \\mid \\mathbf {x} _ {0}\\right) \\right]. \\tag {1} \\\\ \\end{array}\n$$\n",
372
+ "text_format": "latex",
373
+ "bbox": [
374
+ 124,
375
+ 225,
376
+ 487,
377
+ 299
378
+ ],
379
+ "page_idx": 2
380
+ },
381
+ {
382
+ "type": "text",
383
+ "text": "The first expectation is to train the predicted model $f_{\\theta}(\\mathbf{x}_t, t)$ to fit $\\mathbf{x}_0$ from 2 to $T$ . Empirically, it can effectively reduce rounding errors (Li et al., 2022). The second expectation consists of two terms: the first item makes the predicted $\\mathbf{x}_0$ , i.e., $f_{\\theta}(\\mathbf{x}_1, 1)$ , closer to the embedding $\\mathrm{EMB}(\\mathbf{w})$ while the second item aims to correctly round $\\mathbf{x}_0$ to the text $\\mathbf{w}$ .",
384
+ "bbox": [
385
+ 112,
386
+ 300,
387
+ 487,
388
+ 428
389
+ ],
390
+ "page_idx": 2
391
+ },
392
+ {
393
+ "type": "text",
394
+ "text": "2.3 Problem Statement",
395
+ "text_level": 1,
396
+ "bbox": [
397
+ 112,
398
+ 439,
399
+ 312,
400
+ 454
401
+ ],
402
+ "page_idx": 2
403
+ },
404
+ {
405
+ "type": "text",
406
+ "text": "Event skeleton generation is a subtask of temporal complex event schema induction (Li et al., 2021). It aims to automatically induce a schema from instance graphs for a given complex event type, where a complex event type encompasses multiple complex events; see an example of car-bombing shown in Fig. 1. An event schema skeleton consists of nodes for atomic event types and edges for their temporal relations. Since event skeleton generation is a prerequisite yet challenging task in the temporal complex event schema induction task, we focus on this task in our work.",
407
+ "bbox": [
408
+ 112,
409
+ 461,
410
+ 487,
411
+ 652
412
+ ],
413
+ "page_idx": 2
414
+ },
415
+ {
416
+ "type": "text",
417
+ "text": "Formally, let $G = (\\mathcal{N}, \\mathcal{E})$ be an instance graph with $N = |\\mathcal{N}|$ nodes in $\\mathcal{N}$ and $\\mathcal{E}$ be the set of directed edges, one can obtain the corresponding adjacency matrix, $\\mathbf{A} = \\{a_{ij}\\} \\in \\{0,1\\}^{N \\times N}$ , where $a_{ij} = 1$ if $edge(i,j) \\in \\mathcal{E}$ and $a_{ij} = 0$ otherwise. Due to temporal relations, $G$ is a directed acyclic graph (DAG), and $\\mathbf{A}$ is an upper triangular matrix. Each node $n \\in \\mathcal{N}$ represents an atomic event and is assigned with an event type $n_e \\in \\Phi$ , where $\\Phi$ denotes the set of event types. The type of each atomic event is abstracted by the DARPA KAIROS ontology based on its event mention. In practice, we extract a set of instance graphs $\\mathcal{G}$ as outlined in Sec. 4.1 from news articles, where each instance graph $G \\in \\mathcal{G}$ describes a complex event,",
418
+ "bbox": [
419
+ 112,
420
+ 653,
421
+ 489,
422
+ 896
423
+ ],
424
+ "page_idx": 2
425
+ },
426
+ {
427
+ "type": "text",
428
+ "text": "e.g.,Kabul ambulance bombing as shown Fig.1. Given an instance graph set $\\mathcal{G} = \\{G_1,G_2,\\dots \\}$ our goal is to generate a schema $S$ that outlines the underlying evolution pattern of complex events under the given complex event type.",
429
+ "bbox": [
430
+ 507,
431
+ 84,
432
+ 882,
433
+ 165
434
+ ],
435
+ "page_idx": 2
436
+ },
437
+ {
438
+ "type": "text",
439
+ "text": "3 Method",
440
+ "text_level": 1,
441
+ "bbox": [
442
+ 509,
443
+ 175,
444
+ 611,
445
+ 191
446
+ ],
447
+ "page_idx": 2
448
+ },
449
+ {
450
+ "type": "text",
451
+ "text": "We propose Diffusion Event Graph Model (DEGM) to tackle the event skeleton generation task. Our DEGM is capable of generating temporal event graphs from random noise. Fig. 2 illustrates an overview of our DEGM.",
452
+ "bbox": [
453
+ 507,
454
+ 200,
455
+ 882,
456
+ 281
457
+ ],
458
+ "page_idx": 2
459
+ },
460
+ {
461
+ "type": "text",
462
+ "text": "3.1 Denoising Training",
463
+ "text_level": 1,
464
+ "bbox": [
465
+ 507,
466
+ 293,
467
+ 705,
468
+ 307
469
+ ],
470
+ "page_idx": 2
471
+ },
472
+ {
473
+ "type": "text",
474
+ "text": "The denoising training stage consists of three steps to reconstruct the event sequence and graph structure: 1) mapping the event graph into its embedding representation in a continuous space; 2) performing a forward step to obtain the latent variables, or representation with various levels of noise; 3) conducting the denoising step to remove the introduced noise from latent representation.",
475
+ "bbox": [
476
+ 507,
477
+ 313,
478
+ 882,
479
+ 441
480
+ ],
481
+ "page_idx": 2
482
+ },
483
+ {
484
+ "type": "text",
485
+ "text": "Embedding representation Given an instance graph $G$ , we first convert it into a sequence of $m$ events, $E = [e_1,e_2,\\dots ,e_m]$ , where $e_i$ denotes the event type of node $i$ , via topological sorting. We then project $E$ into its embedding representation in a continuous embedding space,",
486
+ "bbox": [
487
+ 507,
488
+ 451,
489
+ 880,
490
+ 546
491
+ ],
492
+ "page_idx": 2
493
+ },
494
+ {
495
+ "type": "equation",
496
+ "text": "\n$$\n\\mathbf {e} = \\left[ \\mathrm {E M B} _ {e} \\left(e _ {1}\\right), \\dots , \\mathrm {E M B} _ {e} \\left(e _ {m}\\right) \\right] \\in \\mathbb {R} ^ {d \\times m}, \\tag {2}\n$$\n",
497
+ "text_format": "latex",
498
+ "bbox": [
499
+ 519,
500
+ 558,
501
+ 880,
502
+ 577
503
+ ],
504
+ "page_idx": 2
505
+ },
506
+ {
507
+ "type": "text",
508
+ "text": "where $d$ is the representation size. Note that $m$ is a preset number of nodes to ensure all graphs are well-aligned. For graphs with less than $m$ nodes, we pad them by a pre-defined event type: PAD, which makes the total number of event types, $M = |\\Phi| + 1$ .",
509
+ "bbox": [
510
+ 507,
511
+ 590,
512
+ 880,
513
+ 686
514
+ ],
515
+ "page_idx": 2
516
+ },
517
+ {
518
+ "type": "text",
519
+ "text": "Forward Step After obtaining the embedded event sequence $\\mathbf{e}$ , we deliver the forward process in the diffusion framework to acquire a sequence of latent variables by monotonically increasing the level of introduced noise. We sample variables of $\\mathbf{x}_0$ and $\\mathbf{x}_t$ via",
520
+ "bbox": [
521
+ 507,
522
+ 695,
523
+ 880,
524
+ 790
525
+ ],
526
+ "page_idx": 2
527
+ },
528
+ {
529
+ "type": "equation",
530
+ "text": "\n$$\nq \\left(\\mathbf {x} _ {0} \\mid \\mathbf {e}\\right) = \\mathcal {N} \\left(\\mathbf {x} _ {0}; \\mathbf {e}, \\beta_ {0} \\mathbf {I}\\right), \\tag {3}\n$$\n",
531
+ "text_format": "latex",
532
+ "bbox": [
533
+ 557,
534
+ 804,
535
+ 880,
536
+ 821
537
+ ],
538
+ "page_idx": 2
539
+ },
540
+ {
541
+ "type": "equation",
542
+ "text": "\n$$\nq \\left(\\mathbf {x} _ {t} \\mid \\mathbf {x} _ {0}\\right) = \\mathcal {N} \\left(\\mathbf {x} _ {t}; \\sqrt {\\bar {\\alpha} _ {t}} \\mathbf {x} _ {0}, (1 - \\bar {\\alpha} _ {t}) \\mathbf {I}\\right), \\tag {4}\n$$\n",
543
+ "text_format": "latex",
544
+ "bbox": [
545
+ 551,
546
+ 825,
547
+ 880,
548
+ 841
549
+ ],
550
+ "page_idx": 2
551
+ },
552
+ {
553
+ "type": "text",
554
+ "text": "where $t = 1,\\dots ,T$ . Moreover, we introduce two additional embeddings to enhance the expressiveness of latent variables, i.e., the absolute position embedding $\\mathbf{W}_{pos}\\in \\mathbb{R}^{m\\times d}$ and the step embedding",
555
+ "bbox": [
556
+ 507,
557
+ 854,
558
+ 882,
559
+ 919
560
+ ],
561
+ "page_idx": 2
562
+ },
563
+ {
564
+ "type": "page_footnote",
565
+ "text": "2https://nlp.jhu.edu/schemas/",
566
+ "bbox": [
567
+ 134,
568
+ 903,
569
+ 356,
570
+ 917
571
+ ],
572
+ "page_idx": 2
573
+ },
574
+ {
575
+ "type": "page_number",
576
+ "text": "12632",
577
+ "bbox": [
578
+ 477,
579
+ 927,
580
+ 524,
581
+ 940
582
+ ],
583
+ "page_idx": 2
584
+ },
585
+ {
586
+ "type": "image",
587
+ "img_path": "images/7492c747df5c742689f0940c2f5de3c9458617301aef2607aeff26de1269532f.jpg",
588
+ "image_caption": [
589
+ "Figure 2: The procedure of training our DEGM. At the preprocessing step, an instance graph $G$ is converted into a temporal sequence of events e via topological sorting and the associated adjacency matrix $\\mathbf{A}$ , which represents the graph structure. Following that, we perform DEGM accordingly. We first convert the discrete events into their representation in a continuous space. The forward step and the denoising step are conducted iteratively to reconstruct the event sequence and the graph structure. Note that we convert the latent variable $\\mathbf{h}_{la}^{t}$ into three representations in two levels, i.e., the shared representation $\\mathbf{h}_{sh}^{t}$ and two task-specific representation for the node's type $\\mathbf{h}_{ty}^{t}$ and the node's structure $\\mathbf{h}_{st}^{t}$ , respectively; see more details in the text."
590
+ ],
591
+ "image_footnote": [],
592
+ "bbox": [
593
+ 137,
594
+ 82,
595
+ 863,
596
+ 296
597
+ ],
598
+ "page_idx": 3
599
+ },
600
+ {
601
+ "type": "text",
602
+ "text": "$\\mathrm{EMB}_s(t)$ . They allow us to capture the event's temporal order in the obtained event sequence and specify that it is at the $t$ -th diffusion step. Adding them together, we obtain the latent variables at $t$ -th diffusion step as",
603
+ "bbox": [
604
+ 112,
605
+ 431,
606
+ 487,
607
+ 511
608
+ ],
609
+ "page_idx": 3
610
+ },
611
+ {
612
+ "type": "equation",
613
+ "text": "\n$$\n\\mathbf {h} _ {l a} ^ {t} = \\mathbf {x} _ {t} + \\mathbf {W} _ {p o s} + \\operatorname {E M B} _ {s} (t). \\tag {5}\n$$\n",
614
+ "text_format": "latex",
615
+ "bbox": [
616
+ 181,
617
+ 517,
618
+ 487,
619
+ 536
620
+ ],
621
+ "page_idx": 3
622
+ },
623
+ {
624
+ "type": "text",
625
+ "text": "Denoising Step Before optimizing the two objectives, event sequence reconstruction and graph structure reconstruction, we first convert the latent variable $\\mathbf{h}_{la}^{t}$ into three variables in two levels, i.e., via a shared encoder $\\mathrm{E}_{sh}$ to $\\mathbf{h}_{sh}^{t}$ and two task-specific encoders, the node's type encoder $\\mathrm{E}_{ty}$ to $\\mathbf{h}_{ty}^{t}$ and the node's structure encoder $\\mathrm{E}_{st}$ to $\\mathbf{h}_{st}^{t}$ . That is,",
626
+ "bbox": [
627
+ 112,
628
+ 541,
629
+ 489,
630
+ 668
631
+ ],
632
+ "page_idx": 3
633
+ },
634
+ {
635
+ "type": "equation",
636
+ "text": "\n$$\n\\mathbf {h} _ {s h} ^ {t} = \\operatorname {E} _ {s h} \\left(\\mathbf {h} _ {l a} ^ {t}\\right), \\tag {6}\n$$\n",
637
+ "text_format": "latex",
638
+ "bbox": [
639
+ 233,
640
+ 675,
641
+ 487,
642
+ 693
643
+ ],
644
+ "page_idx": 3
645
+ },
646
+ {
647
+ "type": "equation",
648
+ "text": "\n$$\n\\mathbf {h} _ {t y} ^ {t} = \\operatorname {E} _ {t y} \\left(\\mathbf {h} _ {s h} ^ {t}\\right), \\tag {7}\n$$\n",
649
+ "text_format": "latex",
650
+ "bbox": [
651
+ 238,
652
+ 695,
653
+ 487,
654
+ 713
655
+ ],
656
+ "page_idx": 3
657
+ },
658
+ {
659
+ "type": "equation",
660
+ "text": "\n$$\n\\mathbf {h} _ {s t} ^ {t} = \\mathrm {E} _ {s t} \\left(\\mathbf {h} _ {s h} ^ {t}\\right). \\tag {8}\n$$\n",
661
+ "text_format": "latex",
662
+ "bbox": [
663
+ 238,
664
+ 715,
665
+ 485,
666
+ 733
667
+ ],
668
+ "page_idx": 3
669
+ },
670
+ {
671
+ "type": "text",
672
+ "text": "In the following, we outline the procedure of constructing encoders $\\mathrm{E}_{sh}$ , $\\mathrm{E}_{ty}$ , and $\\mathrm{E}_{st}$ , each contains $l$ layer. With a little abuse of notations, we define $\\mathbf{h} = [\\mathbf{h}_1, \\dots, \\mathbf{h}_m]$ as the input representation for a layer and the corresponding output as $\\mathbf{h}' = [\\mathbf{h}_1', \\dots, \\mathbf{h}_m']$ .",
673
+ "bbox": [
674
+ 112,
675
+ 740,
676
+ 487,
677
+ 835
678
+ ],
679
+ "page_idx": 3
680
+ },
681
+ {
682
+ "type": "text",
683
+ "text": "Here, we utilize the graph-attention (Velicković et al., 2018) to transform the input representation into a high-level representation as follows:",
684
+ "bbox": [
685
+ 112,
686
+ 835,
687
+ 487,
688
+ 883
689
+ ],
690
+ "page_idx": 3
691
+ },
692
+ {
693
+ "type": "equation",
694
+ "text": "\n$$\n\\mathbf {h} _ {i} ^ {\\prime} = \\sigma \\left(\\sum_ {j = 1} ^ {m} \\alpha_ {i j} \\mathbf {W h} _ {j}\\right), \\tag {9}\n$$\n",
695
+ "text_format": "latex",
696
+ "bbox": [
697
+ 210,
698
+ 881,
699
+ 487,
700
+ 921
701
+ ],
702
+ "page_idx": 3
703
+ },
704
+ {
705
+ "type": "text",
706
+ "text": "where $\\mathbf{W} \\in \\mathbb{R}^{d \\times d}$ is a weight matrix, $\\sigma(\\cdot)$ is a nonlinear activation function. Here, $\\alpha_{ij}$ is the attention weight defined by",
707
+ "bbox": [
708
+ 507,
709
+ 430,
710
+ 882,
711
+ 479
712
+ ],
713
+ "page_idx": 3
714
+ },
715
+ {
716
+ "type": "equation",
717
+ "text": "\n$$\n\\alpha_ {i j} = \\frac {\\exp \\left(\\mathrm {L R} \\left(\\mathbf {a} ^ {T} \\left[ \\mathbf {W h} _ {i} \\| \\mathbf {W h} _ {j} \\right]\\right)\\right)}{\\sum_ {k = 1} ^ {m} \\exp \\left(\\mathrm {L R} \\left(\\mathbf {a} ^ {T} \\left[ \\mathbf {W h} _ {i} \\| \\mathbf {W h} _ {k} \\right]\\right)\\right)}, \\tag {10}\n$$\n",
718
+ "text_format": "latex",
719
+ "bbox": [
720
+ 564,
721
+ 483,
722
+ 882,
723
+ 529
724
+ ],
725
+ "page_idx": 3
726
+ },
727
+ {
728
+ "type": "text",
729
+ "text": "where $\\mathbf{a} \\in \\mathbb{R}^{2d}$ is a weight vector, LR is the LeakyReLU activation function, and $||$ denotes the concatenation operation. We compute attention weights in this way instead of relying on the inner product to prevent higher attention weights between atomic events of the same event type $^3$ , which is not appropriate for constructing the event graph. For instance, the attention weight between two independent Attack events should be less than the weight of one Attack and its successor events.",
730
+ "bbox": [
731
+ 507,
732
+ 537,
733
+ 882,
734
+ 697
735
+ ],
736
+ "page_idx": 3
737
+ },
738
+ {
739
+ "type": "text",
740
+ "text": "After attaining $\\mathbf{h}_{ty}^{t},\\mathbf{h}_{st}^{t}$ , via $\\mathrm{E}_{ty}$ and $\\mathrm{E}_{st}$ , respectively, we compute two losses, the event sequence reconstruction loss $\\mathcal{L}_{ty}^{t}(G)$ and the graph structure reconstruction loss $\\mathcal{L}_{st}^{t}(G)$ at the $t$ -th diffusion step as:",
741
+ "bbox": [
742
+ 507,
743
+ 697,
744
+ 882,
745
+ 778
746
+ ],
747
+ "page_idx": 3
748
+ },
749
+ {
750
+ "type": "equation",
751
+ "text": "\n$$\n\\mathcal {L} _ {t y} ^ {t} (G) = \\text {C r o s s E n t r o p y} \\left(\\mathbf {h} _ {t y} ^ {t}, \\mathbf {W} _ {e} ^ {T}, E\\right), \\tag {11}\n$$\n",
752
+ "text_format": "latex",
753
+ "bbox": [
754
+ 512,
755
+ 787,
756
+ 882,
757
+ 804
758
+ ],
759
+ "page_idx": 3
760
+ },
761
+ {
762
+ "type": "equation",
763
+ "text": "\n$$\n\\mathcal {L} _ {s t} ^ {t} (G) = \\frac {2}{(m - 1) ^ {2}} \\sum_ {i = 1} ^ {m - 1} \\sum_ {j = i + 1} ^ {m} (\\operatorname {M L P} \\left(\\mathbf {h} _ {s t i} ^ {t} \\| \\mathbf {h} _ {s t j} ^ {t}\\right) - a _ {i j}) ^ {2}. \\tag {12}\n$$\n",
764
+ "text_format": "latex",
765
+ "bbox": [
766
+ 514,
767
+ 807,
768
+ 882,
769
+ 835
770
+ ],
771
+ "page_idx": 3
772
+ },
773
+ {
774
+ "type": "text",
775
+ "text": "The objective of $\\mathcal{L}_{ty}^t (G)$ in Eq. (11) is to reduce the difference between the ground truth $E$ and",
776
+ "bbox": [
777
+ 507,
778
+ 841,
779
+ 882,
780
+ 873
781
+ ],
782
+ "page_idx": 3
783
+ },
784
+ {
785
+ "type": "page_footnote",
786
+ "text": "<sup>3</sup>Wu et al. (2022) observe that using the inner product to calculate attention weights results in higher weights between nodes of the same type.",
787
+ "bbox": [
788
+ 507,
789
+ 879,
790
+ 882,
791
+ 917
792
+ ],
793
+ "page_idx": 3
794
+ },
795
+ {
796
+ "type": "page_number",
797
+ "text": "12633",
798
+ "bbox": [
799
+ 477,
800
+ 927,
801
+ 524,
802
+ 940
803
+ ],
804
+ "page_idx": 3
805
+ },
806
+ {
807
+ "type": "text",
808
+ "text": "$\\mathbf{h}_{ty}^{t}\\mathbf{W}_{e}^{T}\\in \\mathbb{R}^{m\\times M}$ , which represents the probabilities of each node belonging to each event type. It is worth noting that $\\mathcal{L}_{ty}^{t}(G)$ offers a simplified version of the training objective outlined in Eq. (1), and empirically improves the quality of the generated schemas. Meanwhile, the objective of $\\mathcal{L}_{st}^{t}(G)$ in Eq. (12) aims to predict the probability of a directed edge from node $i$ to node $j$ and fit their adjacency matrix value $a_{ij}\\in \\mathbf{A}$ . Finally, we obtain the model by minimizing the following loss:",
809
+ "bbox": [
810
+ 112,
811
+ 82,
812
+ 489,
813
+ 244
814
+ ],
815
+ "page_idx": 4
816
+ },
817
+ {
818
+ "type": "equation",
819
+ "text": "\n$$\n\\mathcal {L} = \\sum_ {G \\in \\mathcal {G}} \\sum_ {t = 1} ^ {T} \\mathcal {L} _ {t y} ^ {t} (G) + \\lambda \\mathcal {L} _ {s t} ^ {t} (G), \\tag {13}\n$$\n",
820
+ "text_format": "latex",
821
+ "bbox": [
822
+ 179,
823
+ 247,
824
+ 487,
825
+ 281
826
+ ],
827
+ "page_idx": 4
828
+ },
829
+ {
830
+ "type": "text",
831
+ "text": "where $T$ denotes the total diffusion steps and $\\lambda$ is a constant to balance the two objectives. When training our model, we randomly select a few instance graphs and then sample a diffusion step $t$ for each of these graphs. We then minimize Eq. (13) to update the model's weights until it converges.",
832
+ "bbox": [
833
+ 112,
834
+ 293,
835
+ 489,
836
+ 391
837
+ ],
838
+ "page_idx": 4
839
+ },
840
+ {
841
+ "type": "text",
842
+ "text": "3.2 Schema Generation",
843
+ "text_level": 1,
844
+ "bbox": [
845
+ 112,
846
+ 401,
847
+ 315,
848
+ 416
849
+ ],
850
+ "page_idx": 4
851
+ },
852
+ {
853
+ "type": "text",
854
+ "text": "We start the schema generation procedure from $\\tilde{\\mathbf{h}}_{la}^{T}\\in \\mathbb{R}^{m\\times d}$ , which are sampled from Gaussian noise. We then compute its shared representation $\\tilde{\\mathbf{h}}_{sh}^{t}$ and the node type representation $\\tilde{\\mathbf{h}}_{ty}^{t}$ at the $t$ -th diffusion step reversely:",
855
+ "bbox": [
856
+ 112,
857
+ 422,
858
+ 487,
859
+ 502
860
+ ],
861
+ "page_idx": 4
862
+ },
863
+ {
864
+ "type": "equation",
865
+ "text": "\n$$\n\\tilde {\\mathbf {h}} _ {s h} ^ {t} = \\mathrm {E} _ {s h} \\left(\\tilde {\\mathbf {h}} _ {l a} ^ {t} + \\mathbf {W} _ {p o s} + \\mathbf {E M B} _ {s} (t)\\right), \\tag {14}\n$$\n",
866
+ "text_format": "latex",
867
+ "bbox": [
868
+ 122,
869
+ 514,
870
+ 487,
871
+ 533
872
+ ],
873
+ "page_idx": 4
874
+ },
875
+ {
876
+ "type": "equation",
877
+ "text": "\n$$\n\\tilde {\\mathbf {h}} _ {t y} ^ {t} = \\mathrm {E} _ {t y} \\left(\\tilde {\\mathbf {h}} _ {s h} ^ {t}\\right), \\tilde {\\mathbf {h}} _ {l a} ^ {t - 1} = \\tilde {\\mathbf {h}} _ {t y} ^ {t}, t = T, \\dots , 1. \\tag {15}\n$$\n",
878
+ "text_format": "latex",
879
+ "bbox": [
880
+ 124,
881
+ 535,
882
+ 485,
883
+ 556
884
+ ],
885
+ "page_idx": 4
886
+ },
887
+ {
888
+ "type": "text",
889
+ "text": "After $T$ denoising steps, we obtain the final representation $\\tilde{\\mathbf{h}}_{sh}^{0}$ , $\\tilde{\\mathbf{h}}_{ty}^{0}$ , and compute $\\tilde{\\mathbf{h}}_{st}^{0} = \\mathrm{E}_{st}(\\tilde{\\mathbf{h}}_{sh}^{0})$ .",
890
+ "bbox": [
891
+ 112,
892
+ 568,
893
+ 487,
894
+ 601
895
+ ],
896
+ "page_idx": 4
897
+ },
898
+ {
899
+ "type": "text",
900
+ "text": "Next, we apply the node type representation $\\tilde{\\mathbf{h}}_{ty}^{0}$ and the structure representation $\\tilde{\\mathbf{h}}_{st}^{0}$ to generate the schema. First, with $\\tilde{\\mathbf{h}}_{ty}^{0} = [\\tilde{\\mathbf{h}}_{ty}^{1},\\dots,\\tilde{\\mathbf{h}}_{ty}^{m}]\\in \\mathbb{R}^{m\\times d}$ , we obtain each event's type $e_i\\in \\tilde{E}$ by assigning the event type whose embedding is nearest to $\\tilde{\\mathbf{h}}_{ty}^{i}$ as:",
901
+ "bbox": [
902
+ 112,
903
+ 602,
904
+ 487,
905
+ 701
906
+ ],
907
+ "page_idx": 4
908
+ },
909
+ {
910
+ "type": "equation",
911
+ "text": "\n$$\ne _ {i} = \\underset {e _ {j} \\in \\Phi} {\\arg \\min } \\left(\\| \\tilde {\\mathbf {h}} _ {t y} ^ {i} - \\mathrm {E M B} _ {e} \\left(e _ {j}\\right) \\|\\right). \\tag {16}\n$$\n",
912
+ "text_format": "latex",
913
+ "bbox": [
914
+ 149,
915
+ 714,
916
+ 487,
917
+ 744
918
+ ],
919
+ "page_idx": 4
920
+ },
921
+ {
922
+ "type": "text",
923
+ "text": "Second, with $\\tilde{\\mathbf{h}}_{st}^{0} = [\\tilde{\\mathbf{h}}_{st}^{1},\\dots ,\\tilde{\\mathbf{h}}_{st}^{m}]\\in \\mathbb{R}^{m\\times d}$ , we predict the directed edge from node $i$ to node $j$ where $i < j$ by using a pre-trained classifier MLP trained via Eq. (12) as follows:",
924
+ "bbox": [
925
+ 112,
926
+ 756,
927
+ 487,
928
+ 822
929
+ ],
930
+ "page_idx": 4
931
+ },
932
+ {
933
+ "type": "equation",
934
+ "text": "\n$$\n\\beta_ {i j} = \\left\\{ \\begin{array}{l} 1, \\operatorname {M L P} \\left(\\tilde {\\mathbf {h}} _ {s t} ^ {i} \\left\\| \\tilde {\\mathbf {h}} _ {s t} ^ {j}\\right)\\right) > \\tau \\\\ 0, \\text {o t h e r w i s e}, \\end{array} \\right., \\tag {17}\n$$\n",
935
+ "text_format": "latex",
936
+ "bbox": [
937
+ 171,
938
+ 834,
939
+ 487,
940
+ 876
941
+ ],
942
+ "page_idx": 4
943
+ },
944
+ {
945
+ "type": "text",
946
+ "text": "where $\\tau$ is a threshold to determine the final edges and $\\beta_{ij} \\in \\tilde{\\mathbf{A}}$ is the adjacency matrix value of the",
947
+ "bbox": [
948
+ 112,
949
+ 887,
950
+ 487,
951
+ 920
952
+ ],
953
+ "page_idx": 4
954
+ },
955
+ {
956
+ "type": "text",
957
+ "text": "generated schema. We generate the schema from the reconstructed event sequence $\\tilde{E}$ and adjacency matrix $\\tilde{\\mathbf{A}}$ , and remove PAD type events and the edges associated with them and derive the final schema $S$ .",
958
+ "bbox": [
959
+ 507,
960
+ 84,
961
+ 882,
962
+ 162
963
+ ],
964
+ "page_idx": 4
965
+ },
966
+ {
967
+ "type": "text",
968
+ "text": "4 Experiments",
969
+ "text_level": 1,
970
+ "bbox": [
971
+ 507,
972
+ 175,
973
+ 655,
974
+ 191
975
+ ],
976
+ "page_idx": 4
977
+ },
978
+ {
979
+ "type": "text",
980
+ "text": "4.1 Datasets",
981
+ "text_level": 1,
982
+ "bbox": [
983
+ 507,
984
+ 200,
985
+ 621,
986
+ 214
987
+ ],
988
+ "page_idx": 4
989
+ },
990
+ {
991
+ "type": "text",
992
+ "text": "We conduct experiments to evaluate our model in three IED bombings datasets (Li et al., 2021; Jin et al., 2022). Each dataset associates with a distinct complex event type: General IED, Car bombing IED, and Suicide IED. Taking the complex event type Car bombing IED as an example, to construct the corresponding dataset, we need to build an instance graph set, where each instance graph describes a complex event, e.g., Kabul ambulance bombing. Li et al. (2021) first identify some complex events related to the complex event type based on Wikipedia. Then, each instance graph is constructed from the reference news articles in Wikipedia pages related to the complex event. Specifically, Li et al. (2021) utilized the state-of-the-art information extraction system RESIN (Wen et al., 2021) to extract atomic events, represented as event types, and their temporal relations from news articles, and finally obtained the instance graph set. Next, a human curation is performed to ensure the soundness of the instance graphs (Jin et al., 2022). We utilize the released curated datasets for our experiments and follow previous work (Jin et al., 2022) to split the data into train, validation, and test sets. The statistics of the three datasets are summarized in Table 1.",
993
+ "bbox": [
994
+ 507,
995
+ 221,
996
+ 884,
997
+ 639
998
+ ],
999
+ "page_idx": 4
1000
+ },
1001
+ {
1002
+ "type": "table",
1003
+ "img_path": "images/7d63fe2fbebfd022f5113641a193939e29af2efd64f1db326a1629dc8387be92.jpg",
1004
+ "table_caption": [],
1005
+ "table_footnote": [],
1006
+ "table_body": "<table><tr><td>Datasets</td><td>General-IED</td><td>Car-IED</td><td>Suicide-IED</td></tr><tr><td>train/val/test instance graphs</td><td>88/11/12</td><td>75/9/10</td><td>176/22/22</td></tr><tr><td>Avg e nodes/ee links per graph</td><td>90.8/212.6</td><td>146.5/345.7</td><td>117.4/245.2</td></tr></table>",
1007
+ "bbox": [
1008
+ 510,
1009
+ 646,
1010
+ 878,
1011
+ 688
1012
+ ],
1013
+ "page_idx": 4
1014
+ },
1015
+ {
1016
+ "type": "text",
1017
+ "text": "Table 1: The statistics for the three datasets. \"e\" and \"ee\" denote event and event-event, respectively.",
1018
+ "bbox": [
1019
+ 507,
1020
+ 697,
1021
+ 882,
1022
+ 726
1023
+ ],
1024
+ "page_idx": 4
1025
+ },
1026
+ {
1027
+ "type": "text",
1028
+ "text": "4.2 Baselines",
1029
+ "text_level": 1,
1030
+ "bbox": [
1031
+ 507,
1032
+ 753,
1033
+ 628,
1034
+ 766
1035
+ ],
1036
+ "page_idx": 4
1037
+ },
1038
+ {
1039
+ "type": "text",
1040
+ "text": "We compare our method with the following strong baselines:",
1041
+ "bbox": [
1042
+ 507,
1043
+ 774,
1044
+ 880,
1045
+ 804
1046
+ ],
1047
+ "page_idx": 4
1048
+ },
1049
+ {
1050
+ "type": "text",
1051
+ "text": "- Temporal Event Graph Model (TEGM) (Li et al., 2021): TEGM is based on an autoregressive method that step-by-step generates event and edges between newly generated event and existing events and subsequently uses greedy decoding to obtain the schema, starting from a specially predefined START event.",
1052
+ "bbox": [
1053
+ 531,
1054
+ 806,
1055
+ 882,
1056
+ 917
1057
+ ],
1058
+ "page_idx": 4
1059
+ },
1060
+ {
1061
+ "type": "page_number",
1062
+ "text": "12634",
1063
+ "bbox": [
1064
+ 477,
1065
+ 927,
1066
+ 524,
1067
+ 940
1068
+ ],
1069
+ "page_idx": 4
1070
+ },
1071
+ {
1072
+ "type": "list",
1073
+ "sub_type": "text",
1074
+ "list_items": [
1075
+ "- Frequency-Based Sampling (FBS) (Jin et al., 2022): FBS first counts the occurrence frequency of edges between two event types in the train set. Then the schema is constructed in which each node corresponds to one event type, and initially, the schema does not have any edges. After that, FBS samples one pair of event types based on the occurrence frequency of edges and adds an edge between the corresponding nodes into the schema. The process is repeated until the newly added edge resulting in a cycle in the schema.",
1076
+ "- DoubleGAE (Jin et al., 2022): DoubleGAE generates an event graph based on DVAE (Zhang et al., 2019). They first use a directed GCN encoder to obtain the mean and variance of the event graph's latent variables, and then according to the sampled latent variables to recover the event graph in an autoregressive paradigm, similar to TEGM. Finally, they obtain the schema by feeding the hidden variables sampled from Gaussian noise into the model."
1077
+ ],
1078
+ "bbox": [
1079
+ 136,
1080
+ 84,
1081
+ 489,
1082
+ 454
1083
+ ],
1084
+ "page_idx": 5
1085
+ },
1086
+ {
1087
+ "type": "text",
1088
+ "text": "4.3 Experimental Setup",
1089
+ "text_level": 1,
1090
+ "bbox": [
1091
+ 112,
1092
+ 472,
1093
+ 317,
1094
+ 488
1095
+ ],
1096
+ "page_idx": 5
1097
+ },
1098
+ {
1099
+ "type": "text",
1100
+ "text": "Quantitative metrics. We train our model in the train set for a given dataset and then generate the schema according to Sec. 3.2. To evaluate the quality of the schema, we compare the schema with the instance graphs in the test set using the following metrics:",
1101
+ "bbox": [
1102
+ 112,
1103
+ 495,
1104
+ 487,
1105
+ 590
1106
+ ],
1107
+ "page_idx": 5
1108
+ },
1109
+ {
1110
+ "type": "list",
1111
+ "sub_type": "text",
1112
+ "list_items": [
1113
+ "(1) Event type match. We compute the set of event types in the generated schema and the set for a test instance graph and compute the F1 score between the two sets to see whether our schema contains the event types in the real-word complex events.",
1114
+ "(2) Event sequence match. We compute the set of event sequences with a length 2 (or 3) in the generated schema, as well as the set for a test instance graph, and compute the F1 scores between the two sets to measure how the schema captures substructures in the test instance graphs."
1115
+ ],
1116
+ "bbox": [
1117
+ 112,
1118
+ 593,
1119
+ 487,
1120
+ 770
1121
+ ],
1122
+ "page_idx": 5
1123
+ },
1124
+ {
1125
+ "type": "text",
1126
+ "text": "Note that we calculate the average values of each metric above between the generated schema and each instance graph in the test set as the final results. We generate a set of candidate schemas and test their performance on the validation set, and select the best-performing one as the final schema for the focused complex event type.",
1127
+ "bbox": [
1128
+ 112,
1129
+ 772,
1130
+ 487,
1131
+ 884
1132
+ ],
1133
+ "page_idx": 5
1134
+ },
1135
+ {
1136
+ "type": "text",
1137
+ "text": "Implementation Details. For our DEGM, the representation dimension $d$ is 256. The number of",
1138
+ "bbox": [
1139
+ 112,
1140
+ 887,
1141
+ 487,
1142
+ 917
1143
+ ],
1144
+ "page_idx": 5
1145
+ },
1146
+ {
1147
+ "type": "text",
1148
+ "text": "encoder layers, $l$ , is set to 4. The graph structure reconstruction loss weight $\\lambda$ is 1, and the edge classification threshold $\\tau$ is 0.8. The learning rate is 1e-4 and the number of training epochs is 100. All hyperparameters are chosen based on the validation set. We select the best checkpoint, and the best-performing schema on the validation set according to the event type match (F1) metric. The maximum number of graph nodes $m$ is 50, and the number of our candidate schema is 500 following Jin et al. (2022). The event type in DARPA KAIROS ontology is 67. We define the noise schedule as $\\overline{\\alpha}_t = 1 - \\sqrt{t + 1 / T}$ following Li et al. (2022) and the total diffusion step $T$ is 100. All the experiments are conducted on Tesla A100 GPU with 40G memory.",
1149
+ "bbox": [
1150
+ 507,
1151
+ 84,
1152
+ 884,
1153
+ 341
1154
+ ],
1155
+ "page_idx": 5
1156
+ },
1157
+ {
1158
+ "type": "table",
1159
+ "img_path": "images/896f8394a6f01e255b7a9a1456ee7b0c261c79f9794753bd678ff2d7b7efa26c.jpg",
1160
+ "table_caption": [],
1161
+ "table_footnote": [],
1162
+ "table_body": "<table><tr><td rowspan=\"2\">Datasets</td><td rowspan=\"2\">Methods</td><td rowspan=\"2\">Event type match (F1)</td><td colspan=\"2\">Event seq match (F1)</td></tr><tr><td>l=2</td><td>l=3</td></tr><tr><td rowspan=\"5\">General-IED</td><td>TEGM</td><td>0.638</td><td>0.181</td><td>0.065</td></tr><tr><td>FBS</td><td>0.617</td><td>0.149</td><td>0.064</td></tr><tr><td>DoubleGAE</td><td>0.697</td><td>0.273</td><td>0.128</td></tr><tr><td>Ours avg</td><td>0.726±0.018</td><td>0.361±0.020</td><td>0.137±0.009</td></tr><tr><td>Ours</td><td>0.754±0.008</td><td>0.413±0.010</td><td>0.153±0.016</td></tr><tr><td rowspan=\"5\">Car-IED</td><td>TEGM</td><td>0.588</td><td>0.162</td><td>0.044</td></tr><tr><td>FBS</td><td>0.542</td><td>0.126</td><td>0.038</td></tr><tr><td>DoubleGAE</td><td>0.674</td><td>0.259</td><td>0.081</td></tr><tr><td>Ours avg</td><td>0.754±0.008</td><td>0.413±0.010</td><td>0.153±0.016</td></tr><tr><td>Ours</td><td>0.795±0.002</td><td>0.483±0.030</td><td>0.357±0.063</td></tr><tr><td rowspan=\"5\">Suicide-IED</td><td>TEGM</td><td>0.609</td><td>0.174</td><td>0.048</td></tr><tr><td>FBS</td><td>0.642</td><td>0.164</td><td>0.036</td></tr><tr><td>DoubleGAE</td><td>0.709</td><td>0.290</td><td>0.095</td></tr><tr><td>Ours avg</td><td>0.744±0.009</td><td>0.464±0.015</td><td>0.195±0.052</td></tr><tr><td>Ours</td><td>0.775±0.005</td><td>0.534±0.011</td><td>0.330±0.033</td></tr></table>",
1163
+ "bbox": [
1164
+ 512,
1165
+ 353,
1166
+ 878,
1167
+ 554
1168
+ ],
1169
+ "page_idx": 5
1170
+ },
1171
+ {
1172
+ "type": "text",
1173
+ "text": "Table 2: Results of all methods for the three datasets. Our results include the mean and variance under five different random seeds, while other methods' results are from previous work. The best results are in bold.",
1174
+ "bbox": [
1175
+ 507,
1176
+ 564,
1177
+ 882,
1178
+ 621
1179
+ ],
1180
+ "page_idx": 5
1181
+ },
1182
+ {
1183
+ "type": "text",
1184
+ "text": "4.4 Results and Analysis",
1185
+ "text_level": 1,
1186
+ "bbox": [
1187
+ 507,
1188
+ 640,
1189
+ 719,
1190
+ 656
1191
+ ],
1192
+ "page_idx": 5
1193
+ },
1194
+ {
1195
+ "type": "text",
1196
+ "text": "Table 2 reports the main results of our model and shows some notable observations: (1) Our model has achieved significant progress compared to the baselines across three datasets and three metrics; (2) The average performance of the generated candidate schemas also performs better than previous methods. The reasons for the first observation can be attributed to the ability of our model to iteratively refine the generated schema, enabling the node types and edges between nodes to better match the evolution pattern of the unseen complex events, resulting in superior performance on the test set. In contrast, Temporal Event Graph Model (TEGM) can only generate the next event based on the partially generated event graph during training and generation. DoubleGAE has",
1197
+ "bbox": [
1198
+ 505,
1199
+ 661,
1200
+ 882,
1201
+ 917
1202
+ ],
1203
+ "page_idx": 5
1204
+ },
1205
+ {
1206
+ "type": "page_number",
1207
+ "text": "12635",
1208
+ "bbox": [
1209
+ 477,
1210
+ 927,
1211
+ 524,
1212
+ 940
1213
+ ],
1214
+ "page_idx": 5
1215
+ },
1216
+ {
1217
+ "type": "image",
1218
+ "img_path": "images/a26af6790aec98a6b6d35e885d8eb5ddc32ab56bc3164188534abf6fee664cd9.jpg",
1219
+ "image_caption": [],
1220
+ "image_footnote": [],
1221
+ "bbox": [
1222
+ 272,
1223
+ 85,
1224
+ 737,
1225
+ 98
1226
+ ],
1227
+ "page_idx": 6
1228
+ },
1229
+ {
1230
+ "type": "image",
1231
+ "img_path": "images/3d4b63ccfba2c57a708cb7946e50437b6586e3e55d4fae13f3068cdd30afd9b3.jpg",
1232
+ "image_caption": [
1233
+ "Figure 3: To investigate the impact of topological sorting, we extend the train set by obtaining multiple (isomorphic graph number) isomorphic instance graphs sorted from one original train instance graph. We train and test our model on the extended dataset. All results are mean values under five different random seeds."
1234
+ ],
1235
+ "image_footnote": [],
1236
+ "bbox": [
1237
+ 119,
1238
+ 102,
1239
+ 378,
1240
+ 247
1241
+ ],
1242
+ "page_idx": 6
1243
+ },
1244
+ {
1245
+ "type": "image",
1246
+ "img_path": "images/72fadbdeced651c33c604c703d90fc1dcccfa7952defe03add01379613959719.jpg",
1247
+ "image_caption": [],
1248
+ "image_footnote": [],
1249
+ "bbox": [
1250
+ 388,
1251
+ 102,
1252
+ 626,
1253
+ 247
1254
+ ],
1255
+ "page_idx": 6
1256
+ },
1257
+ {
1258
+ "type": "image",
1259
+ "img_path": "images/c00f5ab60c339a67a28a980810bc004b65d45767d0b89077a7e6a61f135e9e83.jpg",
1260
+ "image_caption": [],
1261
+ "image_footnote": [],
1262
+ "bbox": [
1263
+ 638,
1264
+ 102,
1265
+ 878,
1266
+ 247
1267
+ ],
1268
+ "page_idx": 6
1269
+ },
1270
+ {
1271
+ "type": "image",
1272
+ "img_path": "images/c2562dbd8010208fce762ef42309ef773ef37829c7d4c06fbf1ea157e87704e4.jpg",
1273
+ "image_caption": [
1274
+ "Figure 4: We measure the impact of our simplified node type objective and a design choice which means we denoise the schema based on the structure representation. We find that both are crucial for improving the event type match (F1) metric."
1275
+ ],
1276
+ "image_footnote": [],
1277
+ "bbox": [
1278
+ 114,
1279
+ 326,
1280
+ 485,
1281
+ 476
1282
+ ],
1283
+ "page_idx": 6
1284
+ },
1285
+ {
1286
+ "type": "text",
1287
+ "text": "improved this problem by utilizing the encoder structure to capture the global structure of instance graphs. However, DoubleGAE still employs a similar generation procedure as TEGM during schema generation, resulting in a substantial performance gap with our method. Meanwhile, the performance of FBS is much lower than our method, indicating that the heuristic approach is challenging to generate such a schema, demonstrating the necessity for probabilistic modeling for the event graphs.",
1288
+ "bbox": [
1289
+ 112,
1290
+ 593,
1291
+ 489,
1292
+ 755
1293
+ ],
1294
+ "page_idx": 6
1295
+ },
1296
+ {
1297
+ "type": "text",
1298
+ "text": "For the second observation, we claim that our model is proficient in modeling the distribution of instance graphs. Also, selecting the best-performing schema based on the validation set helps immensely, especially for the event type match (F1) $(l = 3)$ metric. This may be because this metric is more sensitive to the gap between the truth distribution of instance graphs and the modeled distribution, and selecting schema based on the validation set reduces the gap.",
1299
+ "bbox": [
1300
+ 112,
1301
+ 758,
1302
+ 489,
1303
+ 919
1304
+ ],
1305
+ "page_idx": 6
1306
+ },
1307
+ {
1308
+ "type": "text",
1309
+ "text": "4.5 Ablation Studies",
1310
+ "text_level": 1,
1311
+ "bbox": [
1312
+ 507,
1313
+ 329,
1314
+ 685,
1315
+ 342
1316
+ ],
1317
+ "page_idx": 6
1318
+ },
1319
+ {
1320
+ "type": "text",
1321
+ "text": "We verify the importance of our simplified training objective and a design choice while generating the schema through two ablation studies. As shown in Figure 4, we can observe that our simplified training objective $\\mathcal{L}_{ty}^t (G)$ in Eq. 11 performs significantly better than the original one Eq. 1. This may be due to the fact that the original training objective includes three optimization objectives, while ours includes only one. And too many optimization objectives may lead to a larger loss variance, resulting in difficulty in convergence and thus degrading the performance. At the same time, both training objectives share the same goal: to maximize the model's ability to reconstruct the original event sequence at each diffusion step.",
1322
+ "bbox": [
1323
+ 505,
1324
+ 351,
1325
+ 884,
1326
+ 592
1327
+ ],
1328
+ "page_idx": 6
1329
+ },
1330
+ {
1331
+ "type": "text",
1332
+ "text": "Besides, we also investigate an alternative which we assign $\\mathbf{h}_{la}^{t-1}$ as $\\mathbf{h}_{st}^t$ in Eq. (15) while generating schema. We aim to explore whether it would be better to denoise based on the structure representation $\\mathbf{h}_{st}^t$ . However, this leads to a collapse of the event type match (F1) metric as in Figure 4. Probably due to the model is trained based on the embedded event sequence to reconstruct the event sequence and its graph structure. Therefore, the model prefers to denoise based on the node type representation $\\mathbf{h}_{ty}^t$ .",
1333
+ "bbox": [
1334
+ 507,
1335
+ 593,
1336
+ 882,
1337
+ 772
1338
+ ],
1339
+ "page_idx": 6
1340
+ },
1341
+ {
1342
+ "type": "text",
1343
+ "text": "4.6 Impact of Topological Sorting",
1344
+ "text_level": 1,
1345
+ "bbox": [
1346
+ 507,
1347
+ 784,
1348
+ 789,
1349
+ 800
1350
+ ],
1351
+ "page_idx": 6
1352
+ },
1353
+ {
1354
+ "type": "text",
1355
+ "text": "Our approach, as well as previous autoregressive graph generation methods, all require a topological sorting of the instance graph to obtain a sorted version of the graph that is not unique. Therefore, we want to investigate whether the model's performance is affected when we train our model with multiple isomorphic instance graphs randomly",
1356
+ "bbox": [
1357
+ 507,
1358
+ 806,
1359
+ 882,
1360
+ 919
1361
+ ],
1362
+ "page_idx": 6
1363
+ },
1364
+ {
1365
+ "type": "page_number",
1366
+ "text": "12636",
1367
+ "bbox": [
1368
+ 477,
1369
+ 927,
1370
+ 524,
1371
+ 940
1372
+ ],
1373
+ "page_idx": 6
1374
+ },
1375
+ {
1376
+ "type": "text",
1377
+ "text": "sorted from one instance graph. Getting $n$ randomly sorted instance graphs from one instance graph is equivalent to expanding the training set $n$ times. We test our model's performance respectively by setting the $n$ range from 1 to 9. As shown in Figure 3, however, we observe that training our model on the expanded training set hardly affects the model's performance across all three datasets and three metrics. Indicating that our model captures the evolution pattern of the instance graph based only on one sorted instance graph.",
1378
+ "bbox": [
1379
+ 112,
1380
+ 84,
1381
+ 492,
1382
+ 261
1383
+ ],
1384
+ "page_idx": 7
1385
+ },
1386
+ {
1387
+ "type": "text",
1388
+ "text": "4.7 Error Analysis and Case Study",
1389
+ "text_level": 1,
1390
+ "bbox": [
1391
+ 112,
1392
+ 273,
1393
+ 405,
1394
+ 288
1395
+ ],
1396
+ "page_idx": 7
1397
+ },
1398
+ {
1399
+ "type": "image",
1400
+ "img_path": "images/19bf6b6acd7eefbb114c81e27d7a48bd3c8b1d00d3db75dfed39af3bbdf6f9d8.jpg",
1401
+ "image_caption": [
1402
+ "Figure 5: A snippet of schema generated by DEGM."
1403
+ ],
1404
+ "image_footnote": [],
1405
+ "bbox": [
1406
+ 154,
1407
+ 305,
1408
+ 450,
1409
+ 430
1410
+ ],
1411
+ "page_idx": 7
1412
+ },
1413
+ {
1414
+ "type": "text",
1415
+ "text": "In Figure 5, we present a snippet of the schema generated by our model. From this, we can observe two phenomena: (1) The generated schema contains precise types of atomic events and the common substructures.(2) The model has a tendency to generate repeated subsequent events and substructures. The superior performance of our model is revealed by the first phenomenon, which demonstrates its ability to accurately generate both events and substructures. However, the second phenomenon highlights a drawback of the model, which is its tendency to produce duplicate substructures and events. Further analysis revealed that this repetitive structure is caused by a high number of repetitive substructures in the training set, due to the fact that the instance graphs used were extracted from news articles, which can be noisy. As a result, the model learns to replicate these patterns.",
1416
+ "bbox": [
1417
+ 112,
1418
+ 462,
1419
+ 489,
1420
+ 751
1421
+ ],
1422
+ "page_idx": 7
1423
+ },
1424
+ {
1425
+ "type": "text",
1426
+ "text": "5 Related Work",
1427
+ "text_level": 1,
1428
+ "bbox": [
1429
+ 112,
1430
+ 764,
1431
+ 270,
1432
+ 778
1433
+ ],
1434
+ "page_idx": 7
1435
+ },
1436
+ {
1437
+ "type": "text",
1438
+ "text": "According to Jin et al. (2022), event schema induction can be divided into three categories: (1) atomic event schema induction (Chambers, 2013; Cheung et al., 2013; Nguyen et al., 2015; Sha et al., 2016; Yuan et al., 2018) has focused on inducing an event template, called atomic event schema, for multiple similar atomic events. The template includes an abstracted event type and a set of entity roles",
1439
+ "bbox": [
1440
+ 112,
1441
+ 790,
1442
+ 489,
1443
+ 919
1444
+ ],
1445
+ "page_idx": 7
1446
+ },
1447
+ {
1448
+ "type": "text",
1449
+ "text": "shared by all atomic events, while ignoring the relations between events. (2) narrative event schema induction (Chambers and Jurafsky, 2008, 2009; Jans et al., 2012; Rudinger et al., 2015; Granroth-Wilding and Clark, 2016; Zhu et al., 2022; Gao et al., 2022a,b; Long et al., 2022; Yang et al., 2021), in contrast, pays attention to the relations between events. In this task, schema is defined as a narrative-ordered sequence of events, with each event including its entity roles. However, complex events in real-world scenarios often consists of multiple events and entities with innerwined relations.",
1450
+ "bbox": [
1451
+ 507,
1452
+ 84,
1453
+ 884,
1454
+ 275
1455
+ ],
1456
+ "page_idx": 7
1457
+ },
1458
+ {
1459
+ "type": "text",
1460
+ "text": "To under such complex events, Li et al. (2020) incorporate graph structure into schema definition. However, they only consider the relations between two events and their entities. (3) temporal complex event induction task, recently, Li et al. (2021) propose this task in which a schema consists of events, entities, the temporal relations between events, relations between entities, and relations between event and entity (i.e., argument). Each event and entity is abstracted as an event type or entity type, and each event type contains multiple predefined arguments associated with entities. To address this issue, Li et al. (2021) generates the schema event by event. Each time an event is generated, the model links it to existing events, expands it with predefined arguments and entities, and links the entities to existing nodes. This approach leads to the entities' inability to perceive the events' position, resulting in entities cannot distinguish between events of the same type. Therefore (Jin et al., 2022) divide the task into two stages: event skeleton generation and entity-entity relation completion. In the first stage, they employ an autoregressive directed graph generation method (Zhang et al., 2019) to generate the schema skeleton, including events and their relations. In the second stage, they expand the schema skeleton with predefined arguments and entities and complete the remaining relations via a link prediction method VGAE (Kipf and Welling, 2016).",
1461
+ "bbox": [
1462
+ 507,
1463
+ 300,
1464
+ 884,
1465
+ 766
1466
+ ],
1467
+ "page_idx": 7
1468
+ },
1469
+ {
1470
+ "type": "text",
1471
+ "text": "The above event graph induction methods suffer from error accumulation due to the limitations of the autoregressive schema generation paradigm. To address this issue, we propose DEGM which utilizes a denoising training process to enhance the model's robustness to errors and a schema generation process to continuously correct the errors in the generated schema.",
1472
+ "bbox": [
1473
+ 507,
1474
+ 790,
1475
+ 885,
1476
+ 917
1477
+ ],
1478
+ "page_idx": 7
1479
+ },
1480
+ {
1481
+ "type": "page_number",
1482
+ "text": "12637",
1483
+ "bbox": [
1484
+ 477,
1485
+ 927,
1486
+ 524,
1487
+ 940
1488
+ ],
1489
+ "page_idx": 7
1490
+ },
1491
+ {
1492
+ "type": "text",
1493
+ "text": "6 Conclusions",
1494
+ "text_level": 1,
1495
+ "bbox": [
1496
+ 112,
1497
+ 84,
1498
+ 253,
1499
+ 98
1500
+ ],
1501
+ "page_idx": 8
1502
+ },
1503
+ {
1504
+ "type": "text",
1505
+ "text": "We propose Diffusion Event Graph Model, the first workable diffusion model for event skeleton generation. A significant breakthrough is to convert the discrete nodes in event instance graphs into a continuous space via embedding and rounding techniques and a custom edge-based loss. The denoising training process improves model robustness. During the schema generation process, we iteratively correct the errors in the schema via latent representation refinement. Experimental results on the three IED bombing datasets demonstrate that our approach achieves better results than other state-of-the-art baselines.",
1506
+ "bbox": [
1507
+ 112,
1508
+ 109,
1509
+ 490,
1510
+ 317
1511
+ ],
1512
+ "page_idx": 8
1513
+ },
1514
+ {
1515
+ "type": "text",
1516
+ "text": "Limitations",
1517
+ "text_level": 1,
1518
+ "bbox": [
1519
+ 112,
1520
+ 329,
1521
+ 220,
1522
+ 344
1523
+ ],
1524
+ "page_idx": 8
1525
+ },
1526
+ {
1527
+ "type": "text",
1528
+ "text": "Our proposed DEGM for event skeleton generation still contains some limitations:",
1529
+ "bbox": [
1530
+ 112,
1531
+ 354,
1532
+ 487,
1533
+ 385
1534
+ ],
1535
+ "page_idx": 8
1536
+ },
1537
+ {
1538
+ "type": "list",
1539
+ "sub_type": "text",
1540
+ "list_items": [
1541
+ "- It only considers the problem of event skeleton generation, a subtask of temporal complex event schema induction. It is promising to explore the whole task, which includes entities and entity-event relations.",
1542
+ "- Perspective from errors found that our model suffers from a tendency to generate correct duplicate substructures."
1543
+ ],
1544
+ "bbox": [
1545
+ 136,
1546
+ 387,
1547
+ 485,
1548
+ 514
1549
+ ],
1550
+ "page_idx": 8
1551
+ },
1552
+ {
1553
+ "type": "text",
1554
+ "text": "Ethics Statement",
1555
+ "text_level": 1,
1556
+ "bbox": [
1557
+ 112,
1558
+ 526,
1559
+ 265,
1560
+ 541
1561
+ ],
1562
+ "page_idx": 8
1563
+ },
1564
+ {
1565
+ "type": "text",
1566
+ "text": "We follow the ACL Code of Ethics. In our work, there are no human subjects and informed consent is not applicable.",
1567
+ "bbox": [
1568
+ 112,
1569
+ 551,
1570
+ 489,
1571
+ 600
1572
+ ],
1573
+ "page_idx": 8
1574
+ },
1575
+ {
1576
+ "type": "text",
1577
+ "text": "7 Acknowledgments",
1578
+ "text_level": 1,
1579
+ "bbox": [
1580
+ 112,
1581
+ 611,
1582
+ 307,
1583
+ 627
1584
+ ],
1585
+ "page_idx": 8
1586
+ },
1587
+ {
1588
+ "type": "text",
1589
+ "text": "The work was fully supported by the IDEA Information and Super Computing Centre (ISCC) and was partially supported by the National Nature Science Foundation of China (No. 62006062, 62176076, 62201576), Natural Science Foundation of GuangDong 2023A1515012922, the Shenzhen Foundational Research Funding (JCYJ20220818102415032, JCYJ20200109113441941), the Major Key Project of PCL2021A06, Guangdong Provincial Key Labo-ratory of Novel Security Intelligence Technologies 2022B1212010005.",
1590
+ "bbox": [
1591
+ 112,
1592
+ 636,
1593
+ 489,
1594
+ 829
1595
+ ],
1596
+ "page_idx": 8
1597
+ },
1598
+ {
1599
+ "type": "text",
1600
+ "text": "References",
1601
+ "text_level": 1,
1602
+ "bbox": [
1603
+ 114,
1604
+ 854,
1605
+ 213,
1606
+ 871
1607
+ ],
1608
+ "page_idx": 8
1609
+ },
1610
+ {
1611
+ "type": "ref_text",
1612
+ "text": "Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In Proceedings of the 2013 Conference on Empirical Methods",
1613
+ "bbox": [
1614
+ 114,
1615
+ 877,
1616
+ 489,
1617
+ 917
1618
+ ],
1619
+ "page_idx": 8
1620
+ },
1621
+ {
1622
+ "type": "list",
1623
+ "sub_type": "ref_text",
1624
+ "list_items": [
1625
+ "in Natural Language Processing, pages 1797-1807, Seattle, Washington, USA. Association for Computational Linguistics.",
1626
+ "Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of ACL-08: HLT, pages 789-797, Columbus, Ohio. Association for Computational Linguistics.",
1627
+ "Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 602-610, Suntec, Singapore. Association for Computational Linguistics.",
1628
+ "Jackie Chi Kit Cheung, Hoifung Poon, and Lucy Vanderwende. 2013. Probabilistic frame induction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 837-846, Atlanta, Georgia. Association for Computational Linguistics.",
1629
+ "Jun Gao, Wei Wang, Changlong Yu, Huan Zhao, Wilfred Ng, and Ruifeng Xu. 2022a. Improving event representation via simultaneous weakly supervised contrastive learning and clustering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3036-3049, Dublin, Ireland. Association for Computational Linguistics.",
1630
+ "Jun Gao, Changlong Yu, Wei Wang, Huan Zhao, and Ruifeng Xu. 2022b. Mask-then-fill: A flexible and effective data augmentation framework for event extraction. In *Findings of the Association for Computational Linguistics: EMNLP* 2022, pages 4537–4544, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
1631
+ "Mark Granroth-Wilding and Stephen Clark. 2016. What happens next? event prediction using a compositional neural network model. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2727-2733. AAAI Press.",
1632
+ "Bram Jans, Steven Bethard, Ivan Vulic, and Marie Francine Moens. 2012. Skip n-grams and ranking functions for predicting script events. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 336-344, Avignon, France. Association for Computational Linguistics.",
1633
+ "Xiaomeng Jin, Manling Li, and Heng Ji. 2022. Event schema induction with double graph autoencoders. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2013-2025, Seattle, United States. Association for Computational Linguistics."
1634
+ ],
1635
+ "bbox": [
1636
+ 510,
1637
+ 85,
1638
+ 884,
1639
+ 917
1640
+ ],
1641
+ "page_idx": 8
1642
+ },
1643
+ {
1644
+ "type": "page_number",
1645
+ "text": "12638",
1646
+ "bbox": [
1647
+ 477,
1648
+ 927,
1649
+ 524,
1650
+ 940
1651
+ ],
1652
+ "page_idx": 8
1653
+ },
1654
+ {
1655
+ "type": "list",
1656
+ "sub_type": "ref_text",
1657
+ "list_items": [
1658
+ "Thomas N Kipf and Max Welling. 2016. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308.",
1659
+ "Manling Li, Sha Li, Zhenhailong Wang, Lifu Huang, Kyunghyun Cho, Heng Ji, Jiawei Han, and Clare Voss. 2021. The future is not one-dimensional: Complex event schema induction by graph modeling for event prediction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5203-5215, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.",
1660
+ "Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, and Clare Voss. 2020. Connecting the dots: Event graph schema induction with path language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 684-695, Online. Association for Computational Linguistics.",
1661
+ "Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori B Hashimoto. 2022. Diffusion improves controllable text generation. arXiv preprint arXiv:2205.14217.",
1662
+ "Siquu Long, Feiqi Cao, Soyeon Caren Han, and Haiqin Yang. 2022. Vision-and-language pretrained models: A survey. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 5530-5537. ijcai.org.",
1663
+ "Kiem-Hieu Nguyen, Xavier Tannier, Olivier Ferret, and Romaric Besançon. 2015. Generative event schema induction with entity disambiguation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 188-197, Beijing, China. Association for Computational Linguistics.",
1664
+ "Rachel Rudinger, Pushpendre Rastogi, Francis Ferraro, and Benjamin Van Durme. 2015. Script induction as language modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1681-1686, Lisbon, Portugal. Association for Computational Linguistics.",
1665
+ "Lei Sha, Sujian Li, Baobao Chang, and Zhifang Sui. 2016. Joint learning templates and slots for event schema induction. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 428-434, San Diego, California. Association for Computational Linguistics.",
1666
+ "Jiachen Sun, Weili Nie, Zhiding Yu, Z Morley Mao, and Chaowei Xiao. 2022. Pointdp: Diffusion-driven purification against adversarial attacks on 3d point cloud recognition. arXiv preprint arXiv:2208.09801."
1667
+ ],
1668
+ "bbox": [
1669
+ 115,
1670
+ 85,
1671
+ 485,
1672
+ 917
1673
+ ],
1674
+ "page_idx": 9
1675
+ },
1676
+ {
1677
+ "type": "list",
1678
+ "sub_type": "ref_text",
1679
+ "list_items": [
1680
+ "Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations.",
1681
+ "Haoyang Wen, Ying Lin, Tuan Lai, Xiaoman Pan, Sha Li, Xudong Lin, Ben Zhou, Manling Li, Haoyu Wang, Hongming Zhang, Xiaodong Yu, Alexander Dong, Zhenhailong Wang, Yi Fung, Piyush Mishra, Qing Lyu, Didac Suris, Brian Chen, Susan Windisch Brown, Martha Palmer, Chris Callison-Burch, Carl Vondrick, Jiawei Han, Dan Roth, Shih-Fu Chang, and Heng Ji. 2021. RESIN: A dockerized schemaguided cross-document cross-lingual cross-media information extraction and event tracking system. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations, pages 133-143, Online. Association for Computational Linguistics.",
1682
+ "Qitian Wu, Wentao Zhao, Zenan Li, David Wipf, and Junchi Yan. 2022. Nodeformer: A scalable graph structure learning transformer for node classification. In Advances in Neural Information Processing Systems.",
1683
+ "Chaowei Xiao, Zhongzhu Chen, Kun Jin, Jiongxiao Wang, Weili Nie, Mingyan Liu, Anima Anandkumar, Bo Li, and Dawn Song. 2022. Densepure: Understanding diffusion models towards adversarial robustness. arXiv preprint arXiv:2211.00322.",
1684
+ "Haiqin Yang, Xiaoyuan Yao, Yiqun Duan, Jianping Shen, Jie Zhong, and Kun Zhang. 2021. Progressive open-domain response generation with multiple controllable attributes. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event/Montreal, Canada, 19-27 August 2021, pages 3279-3285. ijcai.org.",
1685
+ "Quan Yuan, Xiang Ren, Wenqi He, Chao Zhang, Xinhe Geng, Lifu Huang, Heng Ji, Chin-Yew Lin, and Jiawei Han. 2018. Open-schema event profiling for massive news corpora. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM '18, page 587-596, New York, NY, USA. Association for Computing Machinery.",
1686
+ "Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, and Yixin Chen. 2019. D-vae: A variational autoencoder for directed acyclic graphs. Advances in Neural Information Processing Systems, 32.",
1687
+ "Fangqi Zhu, Jun Gao, Changlong Yu, Wei Wang, Chen Xu, Xin Mu, Min Yang, and Ruifeng Xu. 2022. A generative approach for script event prediction via contrastive fine-tuning."
1688
+ ],
1689
+ "bbox": [
1690
+ 510,
1691
+ 85,
1692
+ 880,
1693
+ 838
1694
+ ],
1695
+ "page_idx": 9
1696
+ },
1697
+ {
1698
+ "type": "page_number",
1699
+ "text": "12639",
1700
+ "bbox": [
1701
+ 477,
1702
+ 928,
1703
+ 524,
1704
+ 940
1705
+ ],
1706
+ "page_idx": 9
1707
+ },
1708
+ {
1709
+ "type": "text",
1710
+ "text": "A For every submission:",
1711
+ "bbox": [
1712
+ 115,
1713
+ 107,
1714
+ 322,
1715
+ 122
1716
+ ],
1717
+ "page_idx": 10
1718
+ },
1719
+ {
1720
+ "type": "list",
1721
+ "sub_type": "text",
1722
+ "list_items": [
1723
+ "A1. Did you describe the limitations of your work? limitation",
1724
+ "A2. Did you discuss any potential risks of your work? limitation",
1725
+ "A3. Do the abstract and introduction summarize the paper's main claims?",
1726
+ "A4. Have you used AI writing assistants when working on this paper? Left blank."
1727
+ ],
1728
+ "bbox": [
1729
+ 129,
1730
+ 126,
1731
+ 695,
1732
+ 287
1733
+ ],
1734
+ "page_idx": 10
1735
+ },
1736
+ {
1737
+ "type": "text",
1738
+ "text": "B Did you use or create scientific artifacts?",
1739
+ "bbox": [
1740
+ 115,
1741
+ 299,
1742
+ 487,
1743
+ 315
1744
+ ],
1745
+ "page_idx": 10
1746
+ },
1747
+ {
1748
+ "type": "text",
1749
+ "text": "4,1",
1750
+ "bbox": [
1751
+ 132,
1752
+ 321,
1753
+ 159,
1754
+ 334
1755
+ ],
1756
+ "page_idx": 10
1757
+ },
1758
+ {
1759
+ "type": "list",
1760
+ "sub_type": "text",
1761
+ "list_items": [
1762
+ "B1. Did you cite the creators of artifacts you used? 4.1",
1763
+ "B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Ethics Statement",
1764
+ "B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Ethics Statement",
1765
+ "B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No personal information exists in the current datasets",
1766
+ "B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We follow the previous work and use the same dataset.",
1767
+ "B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank."
1768
+ ],
1769
+ "bbox": [
1770
+ 129,
1771
+ 346,
1772
+ 880,
1773
+ 753
1774
+ ],
1775
+ "page_idx": 10
1776
+ },
1777
+ {
1778
+ "type": "text",
1779
+ "text": "C Did you run computational experiments?",
1780
+ "bbox": [
1781
+ 115,
1782
+ 764,
1783
+ 492,
1784
+ 781
1785
+ ],
1786
+ "page_idx": 10
1787
+ },
1788
+ {
1789
+ "type": "text",
1790
+ "text": "4",
1791
+ "bbox": [
1792
+ 132,
1793
+ 787,
1794
+ 146,
1795
+ 799
1796
+ ],
1797
+ "page_idx": 10
1798
+ },
1799
+ {
1800
+ "type": "text",
1801
+ "text": "C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4",
1802
+ "bbox": [
1803
+ 129,
1804
+ 810,
1805
+ 880,
1806
+ 858
1807
+ ],
1808
+ "page_idx": 10
1809
+ },
1810
+ {
1811
+ "type": "header",
1812
+ "text": "ACL 2023 Responsible NLP Checklist",
1813
+ "bbox": [
1814
+ 132,
1815
+ 84,
1816
+ 433,
1817
+ 99
1818
+ ],
1819
+ "page_idx": 10
1820
+ },
1821
+ {
1822
+ "type": "page_footnote",
1823
+ "text": "The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.",
1824
+ "bbox": [
1825
+ 112,
1826
+ 866,
1827
+ 877,
1828
+ 889
1829
+ ],
1830
+ "page_idx": 10
1831
+ },
1832
+ {
1833
+ "type": "page_number",
1834
+ "text": "12640",
1835
+ "bbox": [
1836
+ 477,
1837
+ 927,
1838
+ 524,
1839
+ 940
1840
+ ],
1841
+ "page_idx": 10
1842
+ },
1843
+ {
1844
+ "type": "text",
1845
+ "text": "C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?",
1846
+ "bbox": [
1847
+ 129,
1848
+ 84,
1849
+ 878,
1850
+ 115
1851
+ ],
1852
+ "page_idx": 11
1853
+ },
1854
+ {
1855
+ "type": "text",
1856
+ "text": "We use the commonly used hyperparameters",
1857
+ "bbox": [
1858
+ 149,
1859
+ 117,
1860
+ 478,
1861
+ 133
1862
+ ],
1863
+ "page_idx": 11
1864
+ },
1865
+ {
1866
+ "type": "text",
1867
+ "text": "C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?",
1868
+ "bbox": [
1869
+ 129,
1870
+ 142,
1871
+ 882,
1872
+ 190
1873
+ ],
1874
+ "page_idx": 11
1875
+ },
1876
+ {
1877
+ "type": "text",
1878
+ "text": "4",
1879
+ "bbox": [
1880
+ 151,
1881
+ 192,
1882
+ 166,
1883
+ 204
1884
+ ],
1885
+ "page_idx": 11
1886
+ },
1887
+ {
1888
+ "type": "text",
1889
+ "text": "C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?",
1890
+ "bbox": [
1891
+ 127,
1892
+ 218,
1893
+ 882,
1894
+ 265
1895
+ ],
1896
+ "page_idx": 11
1897
+ },
1898
+ {
1899
+ "type": "text",
1900
+ "text": "Not applicable. Left blank.",
1901
+ "bbox": [
1902
+ 149,
1903
+ 267,
1904
+ 349,
1905
+ 282
1906
+ ],
1907
+ "page_idx": 11
1908
+ },
1909
+ {
1910
+ "type": "text",
1911
+ "text": "D Did you use human annotators (e.g., crowdworkers) or research with human participants?",
1912
+ "bbox": [
1913
+ 112,
1914
+ 293,
1915
+ 877,
1916
+ 310
1917
+ ],
1918
+ "page_idx": 11
1919
+ },
1920
+ {
1921
+ "type": "text",
1922
+ "text": "Left blank.",
1923
+ "bbox": [
1924
+ 132,
1925
+ 313,
1926
+ 213,
1927
+ 329
1928
+ ],
1929
+ "page_idx": 11
1930
+ },
1931
+ {
1932
+ "type": "text",
1933
+ "text": "D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?",
1934
+ "bbox": [
1935
+ 127,
1936
+ 340,
1937
+ 882,
1938
+ 372
1939
+ ],
1940
+ "page_idx": 11
1941
+ },
1942
+ {
1943
+ "type": "text",
1944
+ "text": "No response.",
1945
+ "bbox": [
1946
+ 151,
1947
+ 374,
1948
+ 248,
1949
+ 388
1950
+ ],
1951
+ "page_idx": 11
1952
+ },
1953
+ {
1954
+ "type": "text",
1955
+ "text": "D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?",
1956
+ "bbox": [
1957
+ 127,
1958
+ 399,
1959
+ 882,
1960
+ 447
1961
+ ],
1962
+ "page_idx": 11
1963
+ },
1964
+ {
1965
+ "type": "text",
1966
+ "text": "No response.",
1967
+ "bbox": [
1968
+ 149,
1969
+ 449,
1970
+ 248,
1971
+ 464
1972
+ ],
1973
+ "page_idx": 11
1974
+ },
1975
+ {
1976
+ "type": "text",
1977
+ "text": "D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?",
1978
+ "bbox": [
1979
+ 127,
1980
+ 475,
1981
+ 882,
1982
+ 521
1983
+ ],
1984
+ "page_idx": 11
1985
+ },
1986
+ {
1987
+ "type": "text",
1988
+ "text": "No response.",
1989
+ "bbox": [
1990
+ 149,
1991
+ 524,
1992
+ 248,
1993
+ 539
1994
+ ],
1995
+ "page_idx": 11
1996
+ },
1997
+ {
1998
+ "type": "text",
1999
+ "text": "D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?",
2000
+ "bbox": [
2001
+ 127,
2002
+ 549,
2003
+ 873,
2004
+ 565
2005
+ ],
2006
+ "page_idx": 11
2007
+ },
2008
+ {
2009
+ "type": "text",
2010
+ "text": "No response.",
2011
+ "bbox": [
2012
+ 149,
2013
+ 567,
2014
+ 248,
2015
+ 582
2016
+ ],
2017
+ "page_idx": 11
2018
+ },
2019
+ {
2020
+ "type": "text",
2021
+ "text": "D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?",
2022
+ "bbox": [
2023
+ 127,
2024
+ 592,
2025
+ 880,
2026
+ 623
2027
+ ],
2028
+ "page_idx": 11
2029
+ },
2030
+ {
2031
+ "type": "text",
2032
+ "text": "No response.",
2033
+ "bbox": [
2034
+ 149,
2035
+ 626,
2036
+ 248,
2037
+ 640
2038
+ ],
2039
+ "page_idx": 11
2040
+ },
2041
+ {
2042
+ "type": "page_number",
2043
+ "text": "12641",
2044
+ "bbox": [
2045
+ 477,
2046
+ 927,
2047
+ 522,
2048
+ 940
2049
+ ],
2050
+ "page_idx": 11
2051
+ }
2052
+ ]
2023/A Diffusion Model for Event Skeleton Generation/98150961-6381-4a26-81e7-35b4d180926d_model.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Diffusion Model for Event Skeleton Generation/98150961-6381-4a26-81e7-35b4d180926d_origin.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:92f1184d6bb92d6906a8697a59bc8d883c968d940117eefa497afafeae21d063
3
+ size 869972
2023/A Diffusion Model for Event Skeleton Generation/full.md ADDED
@@ -0,0 +1,380 @@
1
+ # A Diffusion Model for Event Skeleton Generation
2
+
3
+ Fangqi Zhu $^{1,3*}$ , Lin Zhang $^{3}$ , Jun Gao $^{1}$ , Bing Qin $^{1}$ , Ruifeng Xu $^{1,2\dagger}$ , Haiqin Yang $^{3\dagger}$
4
+
5
+ $^{1}$ Harbin Institute of Technology, Shenzhen, China
6
+
7
+ $^{2}$ Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies
8
+
9
+ $^{3}$ International Digital Economy Academy (IDEA)
10
+
11
+ zhufangqi hitsz@gmail.com, xuruifeng@hit.edu.cn, hqyang@ieee.org
12
+
13
+ # Abstract
14
+
15
+ Event skeleton generation, aiming to induce an event schema skeleton graph with abstracted event nodes and their temporal relations from a set of event instance graphs, is a critical step in the temporal complex event schema induction task. Existing methods effectively address this task from a graph generation perspective but suffer from noise sensitivity and error accumulation, e.g., the inability to correct errors while generating the schema. We therefore propose a novel Diffusion Event Graph Model (DEGM) to address these issues. Our DEGM is the first workable diffusion model for event skeleton generation, where embedding and rounding techniques with a custom edge-based loss are introduced to transform a discrete event graph into a learnable latent representation. Furthermore, we propose a denoising training process to maintain the model's robustness. Consequently, DEGM derives the final schema, where error correction is guaranteed by iteratively refining the latent representation during the schema generation process. Experimental results on three IED bombing datasets demonstrate that our DEGM achieves better results than other state-of-the-art baselines. Our code and data are available at https://github.com/zhufq00/EventSkeletonGeneration.
16
+
17
+ # 1 Introduction
18
+
19
+ Event schema induction aims to identify common patterns and structures in event data and to extract a high-level representation of events. Current event schema induction tasks mainly focus on simple event schemas, e.g., templates (Chambers, 2013) and scripts (Chambers and Jurafsky, 2009). However, real-world events are usually more complex: they include multiple atomic events, entities, and their relations, and thus require more advanced
20
+
21
+ ![](images/78b97d1808c819b04a19e048949b69c19729cd6dc73515e4e75d795e353de76c.jpg)
22
+ Figure 1: An illustrated example demonstrates the utilization of multiple instance graphs extracted from news articles depicting complex events to generate an event schema skeleton graph for the complex event type Car bombing. The presented instance graph specifically represents the complex event known as the Kabul ambulance bombing. A circle symbolizes an atomic event.
23
+
24
+ techniques to adequately capture and represent the different aspects and relations involved.
25
+
26
+ Recently, Li et al. (2021) propose the temporal complex event schema induction task in order to understand these complex events. The task seeks to abstract a general evolution pattern for complex events from multiple event instance graphs. It is divided into two subtasks: event skeleton generation and entity-entity relation completion. The first task focuses on creating the event skeleton, i.e., representing each atomic event with its associated event type as an event node and exploring their temporal relations. The second one is to complete entities and entity links for the event skeleton. In this paper, we focus on event skeleton generation as it is a prerequisite yet formidable task in temporal complex event schema induction. Figure 1 illustrates
27
+
28
+ an example of instance graphs<sup>1</sup> and the corresponding abstracted schema. Both include abstract event types, such as Attack, and their temporal relations, like Injure happening after Attack.
29
+
30
+ Event skeleton generation requires a deep understanding of events and their multi-dimensional relations. Previous methods employ autoregressive graph generation models to generate a schema, sequentially generating event nodes conditioned on the previous ones. For example, Li et al. (2021) generate each event node with its potential arguments and propagate edge-aware information along the temporal orders. Jin et al. (2022) improve this approach by applying a Graph Convolutional Network (GCN) to better capture structural information in instance graphs and adopting a similar autoregressive approach to generate event graphs. However, autoregressive generation methods for event skeleton generation suffer from errors accumulating over time, which may degrade the performance of the generation model. For instance, as shown in Figure 1, the model may mistakenly generate "Explode" as "Die", causing it to fail to generate subsequent events correctly. Intuitively, as the number of event nodes increases, the error accumulation becomes more severe. This comes from two factors. The first is error propagation in autoregressive graph generation models: they are noise-sensitive and strongly rely on the correctness of previously generated nodes. If the model generates an incorrect node, it leads to a cascading effect of errors in generating the schema. Robustness is thus a serious issue in autoregressive methods. The second factor is the model's inability to correct errors during the generation procedure. Hence, we need a model that can correct generated event-type nodes while generating.
31
+
32
+ To this end, we propose a novel event graph generation model, dubbed Diffusion Event Graph Model (DEGM), to address these issues. To improve the model's robustness, we propose a diffusion-based method, inspired by the outstanding performance of diffusion models in recent research (Sun et al., 2022; Xiao et al., 2022). By carefully selecting the amount of Gaussian noise in the diffusion process, the model can remove adversarial perturbations, thereby increasing its robustness. However, there are still two challenges in applying this method directly to the event graph: (1) mapping the discrete
33
+
34
+ graph structures and event types to a continuous space, and (2) finding a way to recover the event graph from the continuous space. To tackle the first challenge, we develop the denoising stage, which converts the event graph into a sequence and applies an embedding technique to project it into a continuous space. Additionally, we introduce a custom edge-based loss function to capture the structural information lost during the transformation. To tackle the second challenge, we develop a rounding technique to predict event types from their representations and a pre-trained classifier to predict event edges. Finally, to address the model's inability to correct errors, we derive the final schema by iteratively refining the latent representation, which guarantees error correction.
35
+
36
+ We summarize our contributions as follows:
37
+
38
+ - We propose a novel Diffusion Event Graph model (DEGM) for event skeleton generation, in which a denoising training stage guarantees the model's robustness and the schema generation process fulfills error correction via iterative refinement on the latent representation.
39
+ - We are the first to tackle event skeleton generation via diffusion models, where we convert an event graph from discrete nodes to latent variables in a continuous space and train the model parameters by optimizing the event sequence reconstruction and graph structure reconstruction simultaneously.
40
+ - Experimental results on the event skeleton generation task demonstrate that our approach achieves better results than state-of-the-art baselines.
41
+
42
+ # 2 Preliminaries and Problem Statement
43
+
44
+ # 2.1 Diffusion Models in a Continuous Space
45
+
46
+ A diffusion model typically consists of forward and reverse processes. Given data $\mathbf{x}_0\in \mathbb{R}^d$ , the forward process gradually adds noise to $\mathbf{x}_0$ to obtain a sequence of latent variables $\mathbf{x}_1,\dots ,\mathbf{x}_T$ in $\mathbb{R}^d$ , where $\mathbf{x}_T$ is Gaussian noise. Formally, the forward process can be attained by $q(\mathbf{x}_t\mid \mathbf{x}_{t - 1}) = \mathcal{N}\left(\mathbf{x}_t;\sqrt{1 - \beta_t}\mathbf{x}_{t - 1},\beta_t\mathbf{I}\right)$ , where $\beta_{t}$ controls the noise level at the $t$ -th step. Denoting $\alpha_{t} = 1 - \beta_{t}$ and $\overline{\alpha}_t = \prod_{s = 1}^t\alpha_s$ , we can directly obtain $\mathbf{x}_t$ via $q\left(\mathbf{x}_t\mid \mathbf{x}_0\right) = \mathcal{N}\left(\mathbf{x}_t;\sqrt{\overline{\alpha}_t}\mathbf{x}_0,(1 - \overline{\alpha}_t)\mathbf{I}\right)$ . After the forward process is completed, the reverse denoising process can be formulated as $p_\theta (\mathbf{x}_{t - 1}\mid \mathbf{x}_t) = \mathcal{N}(\mathbf{x}_{t - 1};\mu_{\theta}(\mathbf{x}_{t},t),\Sigma_{\theta}(\mathbf{x}_{t},t))$ , where $\mu_{\theta}(\cdot)$ and $\Sigma_{\theta}(\cdot)$ can be implemented using a neural network.
47
+
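The closed-form sampling above can be sketched in a few lines of NumPy. The linear β schedule, dimensions, and seed here are illustrative choices, not settings from any particular model:

```python
import numpy as np

# Linear noise schedule (illustrative): beta_t controls the noise at step t.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)          # bar{alpha}_t = prod_{s<=t} alpha_s

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x0, (1 - abar_t) I)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * noise

rng = np.random.default_rng(0)
x0 = rng.standard_normal(16)             # toy data point in R^16
x_early = q_sample(x0, 10, rng)          # mostly signal
x_late = q_sample(x0, T - 1, rng)        # nearly pure Gaussian noise
```

With this schedule, $\overline{\alpha}_T$ is close to zero, so $\mathbf{x}_T$ is close to pure noise, matching the requirement that $\mathbf{x}_T$ be Gaussian.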
48
+ # 2.2 Diffusion Models in a Discrete Space
49
+
50
+ For discrete data, e.g., text, Li et al. (2022) employ embedding and rounding techniques to map the text to a continuous space, which can also be recovered.
51
+
52
+ Given the embedding of the text $\mathbf{w}$ , $\mathrm{EMB}(\mathbf{w})$ , and suppose $\mathbf{x}_0$ is computed as $q(\mathbf{x}_0|\mathbf{w}) = \mathcal{N}(\mathbf{x}_0; \mathbf{w}, \beta_0\mathbf{I})$ , the corresponding training objective is
53
+
54
+ $$
55
+ \begin{aligned} \mathcal{L}_{\mathrm{simple}}^{\mathrm{e2e}}(\mathbf{w}) ={} & \underset{q_{\phi}(\mathbf{x}_{0:T}\mid \mathbf{w})}{\mathbb{E}} \left[ \sum_{t = 2}^{T} \left\| \mathbf{x}_0 - f_{\theta}(\mathbf{x}_t, t) \right\|^2 \right] \\ & + \underset{q_{\phi}(\mathbf{x}_{0:1}\mid \mathbf{w})}{\mathbb{E}} \left[ \left\| \operatorname{EMB}(\mathbf{w}) - f_{\theta}(\mathbf{x}_1, 1) \right\|^2 - \log p_{\theta}(\mathbf{w}\mid \mathbf{x}_0) \right]. \end{aligned} \tag{1}
56
+ $$
57
+
58
+ The first expectation trains the prediction model $f_{\theta}(\mathbf{x}_t, t)$ to fit $\mathbf{x}_0$ for $t$ from 2 to $T$ . Empirically, it can effectively reduce rounding errors (Li et al., 2022). The second expectation consists of two terms: the first makes the predicted $\mathbf{x}_0$ , i.e., $f_{\theta}(\mathbf{x}_1, 1)$ , closer to the embedding $\mathrm{EMB}(\mathbf{w})$ , while the second aims to correctly round $\mathbf{x}_0$ to the text $\mathbf{w}$ .
59
+
60
+ # 2.3 Problem Statement
61
+
62
+ Event skeleton generation is a subtask of temporal complex event schema induction (Li et al., 2021). It aims to automatically induce a schema from instance graphs for a given complex event type, where a complex event type encompasses multiple complex events; see an example of car-bombing shown in Fig. 1. An event schema skeleton consists of nodes for atomic event types and edges for their temporal relations. Since event skeleton generation is a prerequisite yet challenging task in the temporal complex event schema induction task, we focus on this task in our work.
63
+
64
+ Formally, let $G = (\mathcal{N}, \mathcal{E})$ be an instance graph with $N = |\mathcal{N}|$ nodes and a set $\mathcal{E}$ of directed edges. One can obtain the corresponding adjacency matrix $\mathbf{A} = \{a_{ij}\} \in \{0,1\}^{N \times N}$ , where $a_{ij} = 1$ if $edge(i,j) \in \mathcal{E}$ and $a_{ij} = 0$ otherwise. Due to the temporal relations, $G$ is a directed acyclic graph (DAG), and $\mathbf{A}$ is an upper triangular matrix. Each node $n \in \mathcal{N}$ represents an atomic event and is assigned an event type $n_e \in \Phi$ , where $\Phi$ denotes the set of event types. The type of each atomic event is abstracted by the DARPA KAIROS ontology based on its event mention. In practice, we extract a set of instance graphs $\mathcal{G}$ as outlined in Sec. 4.1 from news articles, where each instance graph $G \in \mathcal{G}$ describes a complex event,
65
+
66
+ e.g., Kabul ambulance bombing as shown in Fig. 1. Given an instance graph set $\mathcal{G} = \{G_1,G_2,\dots \}$ , our goal is to generate a schema $S$ that outlines the underlying evolution pattern of complex events under the given complex event type.
67
+
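The problem setup can be made concrete with a toy instance graph (the event types and edges below are invented for illustration; in the real datasets the types come from the DARPA KAIROS ontology). Topologically sorting the DAG yields an event sequence, and reindexing the nodes in that order makes $\mathbf{A}$ strictly upper triangular:

```python
from graphlib import TopologicalSorter

# Toy instance graph: atomic events (node id -> event type) and temporal edges.
node_types = {0: "Attack", 1: "Injure", 2: "Die", 3: "Investigate"}
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]  # edge (i, j): event i precedes event j

# graphlib expects {node: set_of_predecessors}.
preds = {n: set() for n in node_types}
for i, j in edges:
    preds[j].add(i)
order = list(TopologicalSorter(preds).static_order())

# Event sequence E and adjacency matrix A in topological order; because the
# graph is a DAG and nodes are topologically sorted, A is strictly upper triangular.
pos = {n: k for k, n in enumerate(order)}
E = [node_types[n] for n in order]
m = len(order)
A = [[0] * m for _ in range(m)]
for i, j in edges:
    A[pos[i]][pos[j]] = 1

assert all(A[i][j] == 0 for i in range(m) for j in range(i + 1))  # upper triangular
```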
68
+ # 3 Method
69
+
70
+ We propose Diffusion Event Graph Model (DEGM) to tackle the event skeleton generation task. Our DEGM is capable of generating temporal event graphs from random noise. Fig. 2 illustrates an overview of our DEGM.
71
+
72
+ # 3.1 Denoising Training
73
+
74
+ The denoising training stage consists of three steps to reconstruct the event sequence and graph structure: 1) mapping the event graph into its embedding representation in a continuous space; 2) performing a forward step to obtain the latent variables, i.e., representations with various levels of noise; 3) conducting a denoising step to remove the introduced noise from the latent representation.
75
+
76
+ Embedding representation Given an instance graph $G$ , we first convert it into a sequence of $m$ events, $E = [e_1,e_2,\dots ,e_m]$ , where $e_i$ denotes the event type of node $i$ , via topological sorting. We then project $E$ into its embedding representation in a continuous embedding space,
77
+
78
+ $$
79
+ \mathbf {e} = \left[ \mathrm {E M B} _ {e} \left(e _ {1}\right), \dots , \mathrm {E M B} _ {e} \left(e _ {m}\right) \right] \in \mathbb {R} ^ {d \times m}, \tag {2}
80
+ $$
81
+
82
+ where $d$ is the representation size. Note that $m$ is a preset number of nodes to ensure all graphs are well-aligned. For graphs with fewer than $m$ nodes, we pad them with a pre-defined event type, PAD, which makes the total number of event types $M = |\Phi| + 1$ .
83
+
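A minimal sketch of the embedding step in Eq. (2), with an invented toy ontology and a randomly initialized table standing in for the learned embedding $\mathrm{EMB}_e$:

```python
import numpy as np

event_types = ["Attack", "Injure", "Die", "Investigate"]  # toy ontology Phi
PAD = "PAD"
vocab = event_types + [PAD]        # M = |Phi| + 1 event types in total
m, d = 6, 8                        # preset sequence length and embedding size

rng = np.random.default_rng(0)
EMB_e = rng.standard_normal((len(vocab), d))  # learnable in the real model
type_id = {t: i for i, t in enumerate(vocab)}

def embed_sequence(E):
    """Pad E to length m with PAD, then look up embeddings (Eq. (2))."""
    padded = E + [PAD] * (m - len(E))
    return EMB_e[[type_id[t] for t in padded]]

e = embed_sequence(["Attack", "Injure", "Die"])  # shape (m, d)
```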
84
+ Forward Step After obtaining the embedded event sequence $\mathbf{e}$ , we deliver the forward process in the diffusion framework to acquire a sequence of latent variables by monotonically increasing the level of introduced noise. We sample variables of $\mathbf{x}_0$ and $\mathbf{x}_t$ via
85
+
86
+ $$
87
+ q \left(\mathbf {x} _ {0} \mid \mathbf {e}\right) = \mathcal {N} \left(\mathbf {x} _ {0}; \mathbf {e}, \beta_ {0} \mathbf {I}\right), \tag {3}
88
+ $$
89
+
90
+ $$
91
+ q \left(\mathbf {x} _ {t} \mid \mathbf {x} _ {0}\right) = \mathcal {N} \left(\mathbf {x} _ {t}; \sqrt {\bar {\alpha} _ {t}} \mathbf {x} _ {0}, (1 - \bar {\alpha} _ {t}) \mathbf {I}\right), \tag {4}
92
+ $$
93
+
94
+ where $t = 1,\dots ,T$ . Moreover, we introduce two additional embeddings to enhance the expressiveness of latent variables, i.e., the absolute position embedding $\mathbf{W}_{pos}\in \mathbb{R}^{m\times d}$ and the step embedding
95
+
96
+ ![](images/7492c747df5c742689f0940c2f5de3c9458617301aef2607aeff26de1269532f.jpg)
97
+ Figure 2: The procedure of training our DEGM. At the preprocessing step, an instance graph $G$ is converted into a temporal sequence of events e via topological sorting and the associated adjacency matrix $\mathbf{A}$ , which represents the graph structure. Following that, we perform DEGM accordingly. We first convert the discrete events into their representation in a continuous space. The forward step and the denoising step are conducted iteratively to reconstruct the event sequence and the graph structure. Note that we convert the latent variable $\mathbf{h}_{la}^{t}$ into three representations in two levels, i.e., the shared representation $\mathbf{h}_{sh}^{t}$ and two task-specific representation for the node's type $\mathbf{h}_{ty}^{t}$ and the node's structure $\mathbf{h}_{st}^{t}$ , respectively; see more details in the text.
98
+
99
+ $\mathrm{EMB}_s(t)$ . They allow us to capture the event's temporal order in the obtained event sequence and specify that it is at the $t$ -th diffusion step. Adding them together, we obtain the latent variables at $t$ -th diffusion step as
100
+
101
+ $$
102
+ \mathbf {h} _ {l a} ^ {t} = \mathbf {x} _ {t} + \mathbf {W} _ {p o s} + \operatorname {E M B} _ {s} (t). \tag {5}
103
+ $$
104
+
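Eqs. (3)-(5) can be sketched as follows; the schedule, dimensions, and randomly initialized embedding tables are illustrative stand-ins for the learned ones:

```python
import numpy as np

m, d, T = 6, 8, 100
rng = np.random.default_rng(0)

betas = np.linspace(1e-4, 0.02, T + 1)     # beta_0 is used in q(x_0 | e)
alpha_bars = np.cumprod(1.0 - betas[1:])   # bar{alpha}_t for t = 1..T

e = rng.standard_normal((m, d))            # embedded event sequence (Eq. (2))
W_pos = rng.standard_normal((m, d))        # absolute position embedding
EMB_s = rng.standard_normal((T + 1, d))    # diffusion-step embedding table

# Eq. (3): x_0 ~ N(e, beta_0 I); Eq. (4): x_t ~ N(sqrt(abar_t) x_0, (1-abar_t) I).
x0 = e + np.sqrt(betas[0]) * rng.standard_normal((m, d))
t = 50
xt = (np.sqrt(alpha_bars[t - 1]) * x0
      + np.sqrt(1.0 - alpha_bars[t - 1]) * rng.standard_normal((m, d)))

# Eq. (5): latent variable fed to the denoising encoders.
h_la_t = xt + W_pos + EMB_s[t]             # EMB_s[t] broadcasts over positions
```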
105
+ Denoising Step Before optimizing the two objectives, event sequence reconstruction and graph structure reconstruction, we first convert the latent variable $\mathbf{h}_{la}^{t}$ into three variables in two levels, i.e., via a shared encoder $\mathrm{E}_{sh}$ to $\mathbf{h}_{sh}^{t}$ and two task-specific encoders, the node's type encoder $\mathrm{E}_{ty}$ to $\mathbf{h}_{ty}^{t}$ and the node's structure encoder $\mathrm{E}_{st}$ to $\mathbf{h}_{st}^{t}$ . That is,
106
+
107
+ $$
108
+ \mathbf {h} _ {s h} ^ {t} = \operatorname {E} _ {s h} \left(\mathbf {h} _ {l a} ^ {t}\right), \tag {6}
109
+ $$
110
+
111
+ $$
112
+ \mathbf {h} _ {t y} ^ {t} = \operatorname {E} _ {t y} \left(\mathbf {h} _ {s h} ^ {t}\right), \tag {7}
113
+ $$
114
+
115
+ $$
116
+ \mathbf {h} _ {s t} ^ {t} = \mathrm {E} _ {s t} \left(\mathbf {h} _ {s h} ^ {t}\right). \tag {8}
117
+ $$
118
+
119
+ In the following, we outline the procedure of constructing the encoders $\mathrm{E}_{sh}$ , $\mathrm{E}_{ty}$ , and $\mathrm{E}_{st}$ , each containing $l$ layers. With a slight abuse of notation, we define $\mathbf{h} = [\mathbf{h}_1, \dots, \mathbf{h}_m]$ as the input representation of a layer and the corresponding output as $\mathbf{h}' = [\mathbf{h}_1', \dots, \mathbf{h}_m']$ .
120
+
121
+ Here, we utilize the graph-attention (Velicković et al., 2018) to transform the input representation into a high-level representation as follows:
122
+
123
+ $$
124
+ \mathbf {h} _ {i} ^ {\prime} = \sigma \left(\sum_ {j = 1} ^ {m} \alpha_ {i j} \mathbf {W h} _ {j}\right), \tag {9}
125
+ $$
126
+
127
+ where $\mathbf{W} \in \mathbb{R}^{d \times d}$ is a weight matrix, $\sigma(\cdot)$ is a nonlinear activation function. Here, $\alpha_{ij}$ is the attention weight defined by
128
+
129
+ $$
130
+ \alpha_ {i j} = \frac {\exp \left(\mathrm {L R} \left(\mathbf {a} ^ {T} \left[ \mathbf {W h} _ {i} \| \mathbf {W h} _ {j} \right]\right)\right)}{\sum_ {k = 1} ^ {m} \exp \left(\mathrm {L R} \left(\mathbf {a} ^ {T} \left[ \mathbf {W h} _ {i} \| \mathbf {W h} _ {k} \right]\right)\right)}, \tag {10}
131
+ $$
132
+
133
+ where $\mathbf{a} \in \mathbb{R}^{2d}$ is a weight vector, LR is the LeakyReLU activation function, and $||$ denotes the concatenation operation. We compute attention weights in this way instead of relying on the inner product to prevent higher attention weights between atomic events of the same event type $^3$ , which is not appropriate for constructing the event graph. For instance, the attention weight between two independent Attack events should be less than the weight of one Attack and its successor events.
134
+
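A NumPy sketch of one concatenation-based attention layer (Eqs. (9)-(10)); this is single-head, with tanh standing in for the unspecified activation $\sigma$ and random weights in place of trained ones:

```python
import numpy as np

def leaky_relu(z, slope=0.2):
    return np.where(z > 0, z, slope * z)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    ez = np.exp(z)
    return ez / ez.sum(axis=-1, keepdims=True)

def graph_attention_layer(h, W, a):
    """One layer of Eqs. (9)-(10): scores from concatenated pairs, not dot products."""
    m = h.shape[0]
    Wh = h @ W.T                                  # rows are (W h_j)^T, shape (m, d)
    # score_ij = LeakyReLU(a^T [Wh_i || Wh_j]) for every ordered pair (i, j)
    scores = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            scores[i, j] = leaky_relu(a @ np.concatenate([Wh[i], Wh[j]]))
    alpha = softmax(scores)                        # Eq. (10), normalized over j
    return np.tanh(alpha @ Wh)                     # Eq. (9) with sigma = tanh

rng = np.random.default_rng(0)
m, d = 5, 8
h = rng.standard_normal((m, d))
W = rng.standard_normal((d, d))
a = rng.standard_normal(2 * d)
h_out = graph_attention_layer(h, W, a)
```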
135
+ After attaining $\mathbf{h}_{ty}^{t}$ and $\mathbf{h}_{st}^{t}$ via $\mathrm{E}_{ty}$ and $\mathrm{E}_{st}$ , respectively, we compute two losses, the event sequence reconstruction loss $\mathcal{L}_{ty}^{t}(G)$ and the graph structure reconstruction loss $\mathcal{L}_{st}^{t}(G)$ , at the $t$ -th diffusion step as:
136
+
137
+ $$
138
+ \mathcal{L}_{ty}^{t}(G) = \operatorname{CrossEntropy}\left(\mathbf{h}_{ty}^{t}\mathbf{W}_{e}^{T}, E\right), \tag{11}
139
+ $$
140
+
141
+ $$
142
+ \mathcal{L}_{st}^{t}(G) = \frac{2}{(m - 1)^{2}} \sum_{i = 1}^{m - 1} \sum_{j = i + 1}^{m} \left(\operatorname{MLP}\left(\mathbf{h}_{st,i}^{t} \| \mathbf{h}_{st,j}^{t}\right) - a_{ij}\right)^{2}. \tag{12}
143
+ $$
144
+
145
+ The objective of $\mathcal{L}_{ty}^t (G)$ in Eq. (11) is to reduce the difference between the ground truth $E$ and
146
+
147
+ $\mathbf{h}_{ty}^{t}\mathbf{W}_{e}^{T}\in \mathbb{R}^{m\times M}$ , which represents the probabilities of each node belonging to each event type. It is worth noting that $\mathcal{L}_{ty}^{t}(G)$ offers a simplified version of the training objective outlined in Eq. (1), and empirically improves the quality of the generated schemas. Meanwhile, the objective of $\mathcal{L}_{st}^{t}(G)$ in Eq. (12) aims to predict the probability of a directed edge from node $i$ to node $j$ and fit their adjacency matrix value $a_{ij}\in \mathbf{A}$ . Finally, we obtain the model by minimizing the following loss:
148
+
149
+ $$
150
+ \mathcal {L} = \sum_ {G \in \mathcal {G}} \sum_ {t = 1} ^ {T} \mathcal {L} _ {t y} ^ {t} (G) + \lambda \mathcal {L} _ {s t} ^ {t} (G), \tag {13}
151
+ $$
152
+
153
+ where $T$ denotes the total number of diffusion steps and $\lambda$ is a constant balancing the two objectives. When training our model, we randomly select a few instance graphs and sample a diffusion step $t$ for each of them. We then minimize Eq. (13) to update the model's weights until convergence.
154
+
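The training objective of Eqs. (11)-(13) can be sketched as follows, with random tensors standing in for the encoder outputs and $\lambda = 1$ as an illustrative choice:

```python
import numpy as np

def cross_entropy(logits, targets):
    """Eq. (11): mean CE between h_ty^t W_e^T (logits) and gold type ids."""
    z = logits - logits.max(axis=-1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -logp[np.arange(len(targets)), targets].mean()

def edge_loss(edge_probs, A):
    """Eq. (12): squared error over node pairs (i, j) with j > i against a_ij."""
    m = A.shape[0]
    iu, ju = np.triu_indices(m, k=1)
    return 2.0 / (m - 1) ** 2 * ((edge_probs[iu, ju] - A[iu, ju]) ** 2).sum()

rng = np.random.default_rng(0)
m, M = 4, 5                                   # nodes per graph, event-type count
logits = rng.standard_normal((m, M))          # stand-in for h_ty^t W_e^T
targets = np.array([0, 2, 1, 4])              # gold event-type ids E
A = np.triu(rng.integers(0, 2, (m, m)), k=1)  # gold adjacency matrix
edge_probs = rng.random((m, m))               # stand-in for MLP edge probabilities

lam = 1.0                                     # lambda in Eq. (13), illustrative
loss = cross_entropy(logits, targets) + lam * edge_loss(edge_probs, A)
```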
155
+ # 3.2 Schema Generation
156
+
157
+ We start the schema generation procedure from $\tilde{\mathbf{h}}_{la}^{T}\in \mathbb{R}^{m\times d}$ , which is sampled from Gaussian noise. We then compute its shared representation $\tilde{\mathbf{h}}_{sh}^{t}$ and the node type representation $\tilde{\mathbf{h}}_{ty}^{t}$ at the $t$ -th diffusion step in reverse order:
158
+
159
+ $$
160
+ \tilde {\mathbf {h}} _ {s h} ^ {t} = \mathrm {E} _ {s h} \left(\tilde {\mathbf {h}} _ {l a} ^ {t} + \mathbf {W} _ {p o s} + \mathbf {E M B} _ {s} (t)\right), \tag {14}
161
+ $$
162
+
163
+ $$
164
+ \tilde {\mathbf {h}} _ {t y} ^ {t} = \mathrm {E} _ {t y} \left(\tilde {\mathbf {h}} _ {s h} ^ {t}\right), \tilde {\mathbf {h}} _ {l a} ^ {t - 1} = \tilde {\mathbf {h}} _ {t y} ^ {t}, t = T, \dots , 1. \tag {15}
165
+ $$
166
+
167
+ After $T$ denoising steps, we obtain the final representation $\tilde{\mathbf{h}}_{sh}^{0}$ , $\tilde{\mathbf{h}}_{ty}^{0}$ , and compute $\tilde{\mathbf{h}}_{st}^{0} = \mathrm{E}_{st}(\tilde{\mathbf{h}}_{sh}^{0})$ .
168
+
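The iterative refinement loop of Eqs. (14)-(15) reduces to the following control flow; single linear-plus-tanh maps are stand-ins for the trained encoders $\mathrm{E}_{sh}$ and $\mathrm{E}_{ty}$, purely to show the data flow:

```python
import numpy as np

m, d, T = 4, 8, 50
rng = np.random.default_rng(0)
W_pos = rng.standard_normal((m, d))          # absolute position embedding
EMB_s = rng.standard_normal((T + 1, d))      # diffusion-step embedding table

# Stand-ins for the trained encoders (single linear map + tanh each).
W_sh = rng.standard_normal((d, d)) / np.sqrt(d)
W_ty = rng.standard_normal((d, d)) / np.sqrt(d)
E_sh = lambda h: np.tanh(h @ W_sh)
E_ty = lambda h: np.tanh(h @ W_ty)

h_la = rng.standard_normal((m, d))           # tilde{h}_la^T ~ N(0, I)
for t in range(T, 0, -1):                    # t = T, ..., 1
    h_sh = E_sh(h_la + W_pos + EMB_s[t])     # Eq. (14)
    h_ty = E_ty(h_sh)                        # Eq. (15)
    h_la = h_ty                              # refined latent for the next step
h_sh0, h_ty0 = h_sh, h_ty                    # final representations after T steps
```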
169
+ Next, we apply the node type representation $\tilde{\mathbf{h}}_{ty}^{0}$ and the structure representation $\tilde{\mathbf{h}}_{st}^{0}$ to generate the schema. First, with $\tilde{\mathbf{h}}_{ty}^{0} = [\tilde{\mathbf{h}}_{ty}^{1},\dots,\tilde{\mathbf{h}}_{ty}^{m}]\in \mathbb{R}^{m\times d}$ , we obtain each event's type $e_i\in \tilde{E}$ by assigning the event type whose embedding is nearest to $\tilde{\mathbf{h}}_{ty}^{i}$ as:
170
+
171
+ $$
172
+ e _ {i} = \underset {e _ {j} \in \Phi} {\arg \min } \left(\| \tilde {\mathbf {h}} _ {t y} ^ {i} - \mathrm {E M B} _ {e} \left(e _ {j}\right) \|\right). \tag {16}
173
+ $$
174
+
175
+ Second, with $\tilde{\mathbf{h}}_{st}^{0} = [\tilde{\mathbf{h}}_{st}^{1},\dots ,\tilde{\mathbf{h}}_{st}^{m}]\in \mathbb{R}^{m\times d}$ , we predict the directed edge from node $i$ to node $j$ , where $i < j$ , using the classifier MLP pre-trained via Eq. (12), as follows:
176
+
177
+ $$
178
+ \beta_{ij} = \begin{cases} 1, & \mathrm{MLP}\left(\tilde{\mathbf{h}}_{st}^{i} \,\big\|\, \tilde{\mathbf{h}}_{st}^{j}\right) > \tau \\ 0, & \text{otherwise}, \end{cases} \tag{17}
179
+ $$
180
+
181
+ where $\tau$ is a threshold that determines the final edges and $\beta_{ij} \in \tilde{\mathbf{A}}$ is the corresponding adjacency matrix entry of the
182
+
183
+ generated schema. We build the schema from the reconstructed event sequence $\tilde{E}$ and adjacency matrix $\tilde{\mathbf{A}}$ , then remove PAD-type events and their incident edges to derive the final schema $S$ .
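A minimal sketch of Eq. (17) together with the PAD-pruning step, where `mlp_score` stands in for the pre-trained classifier (the interface is an assumption):

```python
import numpy as np

def build_schema(h_st, event_types, mlp_score, tau=0.8, pad_type=0):
    """Score each ordered pair (i, j), i < j, on the concatenated
    structure representations (Eq. (17)), keep edges above tau, then
    drop PAD nodes and their incident edges."""
    m = len(event_types)
    A = np.zeros((m, m), dtype=int)
    for i in range(m):
        for j in range(i + 1, m):
            if mlp_score(np.concatenate([h_st[i], h_st[j]])) > tau:
                A[i, j] = 1                  # beta_ij = 1
    keep = [i for i, e in enumerate(event_types) if e != pad_type]
    return [event_types[i] for i in keep], A[np.ix_(keep, keep)]
```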
184
+
185
+ # 4 Experiments
186
+
187
+ # 4.1 Datasets
188
+
189
+ We conduct experiments to evaluate our model on three IED bombing datasets (Li et al., 2021; Jin et al., 2022). Each dataset is associated with a distinct complex event type: General IED, Car bombing IED, and Suicide IED. Taking the complex event type Car bombing IED as an example, constructing the corresponding dataset requires building an instance graph set in which each instance graph describes a complex event, e.g., the Kabul ambulance bombing. Li et al. (2021) first identify complex events related to the complex event type based on Wikipedia. Each instance graph is then constructed from the reference news articles in the Wikipedia pages related to the complex event. Specifically, Li et al. (2021) utilized the state-of-the-art information extraction system RESIN (Wen et al., 2021) to extract atomic events, represented as event types, and their temporal relations from the news articles, yielding the instance graph set. Finally, human curation is performed to ensure the soundness of the instance graphs (Jin et al., 2022). We use the released curated datasets for our experiments and follow previous work (Jin et al., 2022) in splitting the data into train, validation, and test sets. The statistics of the three datasets are summarized in Table 1.
190
+
191
+ <table><tr><td>Datasets</td><td>General-IED</td><td>Car-IED</td><td>Suicide-IED</td></tr><tr><td>train/val/test instance graphs</td><td>88/11/12</td><td>75/9/10</td><td>176/22/22</td></tr><tr><td>Avg e nodes/ee links per graph</td><td>90.8/212.6</td><td>146.5/345.7</td><td>117.4/245.2</td></tr></table>
192
+
193
+ Table 1: The statistics for the three datasets. "e" and "ee" denote event and event-event, respectively.
194
+
195
+ # 4.2 Baselines
196
+
197
+ We compare our method with the following strong baselines:
198
+
199
+ - Temporal Event Graph Model (TEGM) (Li et al., 2021): TEGM is an autoregressive method that, starting from a specially predefined START event, generates one event at a time together with the edges between the newly generated event and the existing events, and then uses greedy decoding to obtain the schema.
200
+
201
+ - Frequency-Based Sampling (FBS) (Jin et al., 2022): FBS first counts the occurrence frequency of edges between each pair of event types in the train set. The schema is initialized with one node per event type and no edges. FBS then samples a pair of event types according to the edge frequencies and adds an edge between the corresponding nodes to the schema. The process repeats until a newly added edge would create a cycle in the schema.
202
+ - DoubleGAE (Jin et al., 2022): DoubleGAE generates an event graph based on DVAE (Zhang et al., 2019). It first uses a directed GCN encoder to obtain the mean and variance of the event graph's latent variables, and then recovers the event graph from the sampled latent variables in an autoregressive paradigm similar to TEGM. Finally, the schema is obtained by feeding latent variables sampled from Gaussian noise into the model.
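Of the baselines above, FBS is simple enough to sketch directly. Under the reading that sampling stops once an edge would close a cycle (a draw cap is added for safety and is our assumption, not part of the original method):

```python
import random
from collections import Counter

def has_path(adj, src, dst):
    """DFS reachability: used to check whether adding an edge would close a cycle."""
    stack, seen = [src], set()
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        if u in seen:
            continue
        seen.add(u)
        stack.extend(adj.get(u, ()))
    return False

def fbs_schema(train_edges, seed=0, max_draws=10000):
    """Sample event-type pairs proportionally to their edge frequency in
    the train set; add each sampled edge until one would create a cycle."""
    rng = random.Random(seed)
    freq = Counter(train_edges)                  # (type_u, type_v) -> count
    pairs, weights = zip(*freq.items())
    adj, edges = {}, set()
    for _ in range(max_draws):
        u, v = rng.choices(pairs, weights=weights, k=1)[0]
        if has_path(adj, v, u):                  # u -> v would close a cycle
            break
        edges.add((u, v))
        adj.setdefault(u, set()).add(v)
    return edges
```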
203
+
204
+ # 4.3 Experimental Setup
205
+
206
+ Quantitative metrics. We train our model on the train set of a given dataset and then generate the schema following Sec. 3.2. To evaluate the quality of the schema, we compare it with the instance graphs in the test set using the following metrics:
207
+
208
+ (1) Event type match. We compute the set of event types in the generated schema and the corresponding set for a test instance graph, and compute the F1 score between the two sets to check whether our schema contains the event types of real-world complex events.
209
+ (2) Event sequence match. We compute the set of event sequences of length 2 (or 3) in the generated schema, as well as the corresponding set for a test instance graph, and compute the F1 score between the two sets to measure how well the schema captures substructures of the test instance graphs.
210
+
211
+ Note that the final result for each metric is the average over all pairs of the generated schema and each instance graph in the test set. We generate a set of candidate schemas, evaluate them on the validation set, and select the best-performing one as the final schema for the target complex event type.
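Both metrics reduce to an F1 score between two sets. A sketch of the set-level F1 and of the enumeration of length-$l$ event sequences, under the assumption that a sequence is a directed path of event types:

```python
def set_f1(pred, gold):
    """F1 between two sets, as used for both the event type match and
    the event sequence match metrics."""
    pred, gold = set(pred), set(gold)
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    p, r = tp / len(pred), tp / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def event_sequences(edges, length=2):
    """Enumerate directed event-type paths with `length` nodes from an
    edge list of (type_u, type_v) pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
    paths = {(u, v) for u, v in edges}
    for _ in range(length - 2):
        paths = {p + (w,) for p in paths for w in adj.get(p[-1], ())}
    return paths
```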
212
+
213
+ Implementation Details. For our DEGM, the representation dimension $d$ is 256. The number of
214
+
215
+ encoder layers, $l$ , is set to 4. The graph structure reconstruction loss weight $\lambda$ is 1, and the edge classification threshold $\tau$ is 0.8. The learning rate is 1e-4 and the number of training epochs is 100. All hyperparameters are chosen based on the validation set. We select the best checkpoint, and the best-performing schema on the validation set according to the event type match (F1) metric. The maximum number of graph nodes $m$ is 50, and the number of our candidate schema is 500 following Jin et al. (2022). The event type in DARPA KAIROS ontology is 67. We define the noise schedule as $\overline{\alpha}_t = 1 - \sqrt{t + 1 / T}$ following Li et al. (2022) and the total diffusion step $T$ is 100. All the experiments are conducted on Tesla A100 GPU with 40G memory.
216
+
217
+ <table><tr><td rowspan="2">Datasets</td><td rowspan="2">Methods</td><td rowspan="2">Event type match (F1)</td><td colspan="2">Event seq match (F1)</td></tr><tr><td>l=2</td><td>l=3</td></tr><tr><td rowspan="5">General-IED</td><td>TEGM</td><td>0.638</td><td>0.181</td><td>0.065</td></tr><tr><td>FBS</td><td>0.617</td><td>0.149</td><td>0.064</td></tr><tr><td>DoubleGAE</td><td>0.697</td><td>0.273</td><td>0.128</td></tr><tr><td>Ours avg</td><td>0.726±0.018</td><td>0.361±0.020</td><td>0.137±0.009</td></tr><tr><td>Ours</td><td>0.754±0.008</td><td>0.413±0.010</td><td>0.153±0.016</td></tr><tr><td rowspan="5">Car-IED</td><td>TEGM</td><td>0.588</td><td>0.162</td><td>0.044</td></tr><tr><td>FBS</td><td>0.542</td><td>0.126</td><td>0.038</td></tr><tr><td>DoubleGAE</td><td>0.674</td><td>0.259</td><td>0.081</td></tr><tr><td>Ours avg</td><td>0.754±0.008</td><td>0.413±0.010</td><td>0.153±0.016</td></tr><tr><td>Ours</td><td>0.795±0.002</td><td>0.483±0.030</td><td>0.357±0.063</td></tr><tr><td rowspan="5">Suicide-IED</td><td>TEGM</td><td>0.609</td><td>0.174</td><td>0.048</td></tr><tr><td>FBS</td><td>0.642</td><td>0.164</td><td>0.036</td></tr><tr><td>DoubleGAE</td><td>0.709</td><td>0.290</td><td>0.095</td></tr><tr><td>Ours avg</td><td>0.744±0.009</td><td>0.464±0.015</td><td>0.195±0.052</td></tr><tr><td>Ours</td><td>0.775±0.005</td><td>0.534±0.011</td><td>0.330±0.033</td></tr></table>
218
+
219
+ Table 2: Results of all methods for the three datasets. Our results include the mean and variance under five different random seeds, while other methods' results are from previous work. The best results are in bold.
220
+
221
+ # 4.4 Results and Analysis
222
+
223
+ Table 2 reports the main results of our model and shows two notable observations: (1) our model achieves significant improvements over the baselines across all three datasets and all three metrics; (2) even the average performance of the generated candidate schemas exceeds that of previous methods. The first observation can be attributed to our model's ability to iteratively refine the generated schema, enabling the node types and inter-node edges to better match the evolution patterns of unseen complex events and thus yielding superior performance on the test set. In contrast, the Temporal Event Graph Model (TEGM) can only generate the next event conditioned on the partially generated event graph during training and generation. DoubleGAE has
224
+
225
+ ![](images/a26af6790aec98a6b6d35e885d8eb5ddc32ab56bc3164188534abf6fee664cd9.jpg)
226
+
227
+ ![](images/3d4b63ccfba2c57a708cb7946e50437b6586e3e55d4fae13f3068cdd30afd9b3.jpg)
228
+ Figure 3: To investigate the impact of topological sorting, we extend the train set by obtaining multiple (isomorphic graph number) isomorphic instance graphs sorted from one original train instance graph. We train and test our model on the extended dataset. All results are mean values under five different random seeds.
229
+
230
+ ![](images/72fadbdeced651c33c604c703d90fc1dcccfa7952defe03add01379613959719.jpg)
231
+
232
+ ![](images/c00f5ab60c339a67a28a980810bc004b65d45767d0b89077a7e6a61f135e9e83.jpg)
233
+
234
+ ![](images/c2562dbd8010208fce762ef42309ef773ef37829c7d4c06fbf1ea157e87704e4.jpg)
235
+ Figure 4: We measure the impact of our simplified node type objective and of a design choice in which the schema is denoised based on the structure representation. We find that both are crucial to the event type match (F1) metric.
236
+
237
+ mitigated this problem by utilizing an encoder to capture the global structure of instance graphs. However, DoubleGAE still employs a generation procedure similar to TEGM's during schema generation, resulting in a substantial performance gap with our method. Meanwhile, FBS performs far below our method, indicating that such a schema is difficult to generate with a heuristic approach and demonstrating the necessity of probabilistic modeling of event graphs.
238
+
239
+ For the second observation, these results suggest that our model is proficient at modeling the distribution of instance graphs. Selecting the best-performing schema based on the validation set also helps immensely, especially for the event sequence match (F1) $(l = 3)$ metric. This may be because this metric is more sensitive to the gap between the true distribution of instance graphs and the modeled distribution, and selecting the schema based on the validation set reduces this gap.
240
+
241
+ # 4.5 Ablation Studies
242
+
243
+ We verify the importance of our simplified training objective and of a design choice in schema generation through two ablation studies. As shown in Figure 4, our simplified training objective $\mathcal{L}_{ty}^t (G)$ in Eq. (11) performs significantly better than the original one in Eq. (1). This may be because the original training objective combines three optimization objectives, while ours includes only one; too many optimization objectives can lead to larger loss variance, making convergence difficult and thus degrading performance. At the same time, both training objectives share the same goal: to maximize the model's ability to reconstruct the original event sequence at each diffusion step.
244
+
245
+ Besides, we also investigate an alternative in which we assign $\mathbf{h}_{la}^{t-1} = \mathbf{h}_{st}^t$ in Eq. (15) during schema generation, to explore whether it would be better to denoise based on the structure representation $\mathbf{h}_{st}^t$ . However, this leads to a collapse of the event type match (F1) metric, as shown in Figure 4, probably because the model is trained to reconstruct the event sequence and its graph structure from the embedded event sequence. The model therefore prefers to denoise based on the node type representation $\mathbf{h}_{ty}^t$ .
246
+
247
+ # 4.6 Impact of Topological Sorting
248
+
249
+ Our approach, like previous autoregressive graph generation methods, requires a topological sorting of each instance graph, and the resulting sorted graph is not unique. We therefore investigate whether the model's performance is affected when we train it with multiple isomorphic instance graphs randomly
250
+
251
+ sorted from one instance graph. Obtaining $n$ randomly sorted instance graphs from one instance graph is equivalent to expanding the training set $n$ times. We test our model's performance with $n$ ranging from 1 to 9. As shown in Figure 3, training on the expanded training set hardly affects the model's performance across all three datasets and three metrics, indicating that our model captures the evolution pattern of the instance graphs from a single sorted instance graph.
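Drawing multiple sorted versions of one instance graph amounts to topologically sorting the DAG with random tie-breaking; a standard sketch using Kahn's algorithm:

```python
import random

def random_topological_sort(nodes, edges, seed=None):
    """Kahn's algorithm with random tie-breaking: each call returns one
    of the (generally many) valid topological orders of a DAG."""
    rng = random.Random(seed)
    indeg = {v: 0 for v in nodes}
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    ready = [v for v in nodes if indeg[v] == 0]
    order = []
    while ready:
        u = ready.pop(rng.randrange(len(ready)))  # random choice among ready nodes
        order.append(u)
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    return order
```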
252
+
253
+ # 4.7 Error Analysis and Case Study
254
+
255
+ ![](images/19bf6b6acd7eefbb114c81e27d7a48bd3c8b1d00d3db75dfed39af3bbdf6f9d8.jpg)
256
+ Figure 5: A snippet of schema generated by DEGM.
257
+
258
+ In Figure 5, we present a snippet of the schema generated by our model, from which we observe two phenomena: (1) the generated schema contains precise atomic event types and the common substructures; (2) the model tends to generate repeated subsequent events and substructures. The first phenomenon reflects the strength of our model: it accurately generates both events and substructures. The second, however, highlights a drawback: a tendency to produce duplicate substructures and events. Further analysis reveals that this repetition stems from a high number of repetitive substructures in the training set, since the instance graphs were extracted from news articles, which can be noisy; as a result, the model learns to replicate these patterns.
259
+
260
+ # 5 Related Work
261
+
262
+ According to Jin et al. (2022), event schema induction can be divided into three categories: (1) atomic event schema induction (Chambers, 2013; Cheung et al., 2013; Nguyen et al., 2015; Sha et al., 2016; Yuan et al., 2018) has focused on inducing an event template, called atomic event schema, for multiple similar atomic events. The template includes an abstracted event type and a set of entity roles
263
+
264
+ shared by all atomic events, while ignoring the relations between events. (2) Narrative event schema induction (Chambers and Jurafsky, 2008, 2009; Jans et al., 2012; Rudinger et al., 2015; Granroth-Wilding and Clark, 2016; Zhu et al., 2022; Gao et al., 2022a,b; Long et al., 2022; Yang et al., 2021), in contrast, pays attention to the relations between events. In this task, a schema is defined as a narrative-ordered sequence of events, with each event including its entity roles. However, complex events in real-world scenarios often consist of multiple events and entities with intertwined relations.
265
+
266
+ To understand such complex events, Li et al. (2020) incorporate graph structure into the schema definition; however, they only consider the relations between two events and their entities. (3) Temporal complex event schema induction: Li et al. (2021) recently proposed this task, in which a schema consists of events, entities, temporal relations between events, relations between entities, and relations between events and entities (i.e., arguments). Each event and entity is abstracted as an event type or entity type, and each event type contains multiple predefined arguments associated with entities. To address this task, Li et al. (2021) generate the schema event by event: each time an event is generated, the model links it to existing events, expands it with predefined arguments and entities, and links the entities to existing nodes. This approach prevents entities from perceiving the events' positions, so entities cannot distinguish between events of the same type. Therefore, Jin et al. (2022) divide the task into two stages: event skeleton generation and entity-entity relation completion. In the first stage, they employ an autoregressive directed graph generation method (Zhang et al., 2019) to generate the schema skeleton, including events and their relations. In the second stage, they expand the schema skeleton with predefined arguments and entities and complete the remaining relations via the link prediction method VGAE (Kipf and Welling, 2016).
267
+
268
+ The above event graph induction methods suffer from error accumulation due to the limitations of the autoregressive schema generation paradigm. To address this issue, we propose DEGM, which utilizes a denoising training process to enhance the model's robustness to errors and a schema generation process that continuously corrects errors in the generated schema.
269
+
270
+ # 6 Conclusions
271
+
272
+ We propose the Diffusion Event Graph Model (DEGM), the first workable diffusion model for event skeleton generation. A key contribution is converting the discrete nodes of event instance graphs into a continuous space via embedding and rounding techniques and a custom edge-based loss. The denoising training process improves model robustness, and during schema generation we iteratively correct errors in the schema via latent representation refinement. Experimental results on three IED bombing datasets demonstrate that our approach outperforms state-of-the-art baselines.
273
+
274
+ # Limitations
275
+
276
+ Our proposed DEGM for event skeleton generation still has some limitations:
277
+
278
+ - It only considers the problem of event skeleton generation, a subtask of temporal complex event schema induction. It is promising to explore the whole task, which includes entities and entity-event relations.
279
+ - Error analysis shows that our model tends to generate duplicate (albeit correct) substructures.
280
+
281
+ # Ethics Statement
282
+
283
+ We follow the ACL Code of Ethics. In our work, there are no human subjects and informed consent is not applicable.
284
+
285
+ # 7 Acknowledgments
286
+
287
+ The work was fully supported by the IDEA Information and Super Computing Centre (ISCC) and was partially supported by the National Nature Science Foundation of China (No. 62006062, 62176076, 62201576), Natural Science Foundation of GuangDong 2023A1515012922, the Shenzhen Foundational Research Funding (JCYJ20220818102415032, JCYJ20200109113441941), the Major Key Project of PCL2021A06, Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies 2022B1212010005.
288
+
289
+ # References
290
+
291
+ Nathanael Chambers. 2013. Event schema induction with a probabilistic entity-driven model. In Proceedings of the 2013 Conference on Empirical Methods
292
+
293
+ in Natural Language Processing, pages 1797-1807, Seattle, Washington, USA. Association for Computational Linguistics.
294
+ Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of ACL-08: HLT, pages 789-797, Columbus, Ohio. Association for Computational Linguistics.
295
+ Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 602-610, Suntec, Singapore. Association for Computational Linguistics.
296
+ Jackie Chi Kit Cheung, Hoifung Poon, and Lucy Vanderwende. 2013. Probabilistic frame induction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 837-846, Atlanta, Georgia. Association for Computational Linguistics.
297
+ Jun Gao, Wei Wang, Changlong Yu, Huan Zhao, Wilfred Ng, and Ruifeng Xu. 2022a. Improving event representation via simultaneous weakly supervised contrastive learning and clustering. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3036-3049, Dublin, Ireland. Association for Computational Linguistics.
298
+ Jun Gao, Changlong Yu, Wei Wang, Huan Zhao, and Ruifeng Xu. 2022b. Mask-then-fill: A flexible and effective data augmentation framework for event extraction. In *Findings of the Association for Computational Linguistics: EMNLP* 2022, pages 4537–4544, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
299
+ Mark Granroth-Wilding and Stephen Clark. 2016. What happens next? event prediction using a compositional neural network model. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2727-2733. AAAI Press.
300
+ Bram Jans, Steven Bethard, Ivan Vulic, and Marie Francine Moens. 2012. Skip n-grams and ranking functions for predicting script events. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 336-344, Avignon, France. Association for Computational Linguistics.
301
+ Xiaomeng Jin, Manling Li, and Heng Ji. 2022. Event schema induction with double graph autoencoders. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2013-2025, Seattle, United States. Association for Computational Linguistics.
302
+
303
+ Thomas N Kipf and Max Welling. 2016. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308.
304
+ Manling Li, Sha Li, Zhenhailong Wang, Lifu Huang, Kyunghyun Cho, Heng Ji, Jiawei Han, and Clare Voss. 2021. The future is not one-dimensional: Complex event schema induction by graph modeling for event prediction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5203-5215, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
305
+ Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, and Clare Voss. 2020. Connecting the dots: Event graph schema induction with path language modeling. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 684-695, Online. Association for Computational Linguistics.
306
+ Xiang Lisa Li, John Thickstun, Ishaan Gulrajani, Percy Liang, and Tatsunori B Hashimoto. 2022. Diffusion-LM improves controllable text generation. arXiv preprint arXiv:2205.14217.
307
+ Siqu Long, Feiqi Cao, Soyeon Caren Han, and Haiqin Yang. 2022. Vision-and-language pretrained models: A survey. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 5530-5537. ijcai.org.
308
+ Kiem-Hieu Nguyen, Xavier Tannier, Olivier Ferret, and Romaric Besançon. 2015. Generative event schema induction with entity disambiguation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 188-197, Beijing, China. Association for Computational Linguistics.
309
+ Rachel Rudinger, Pushpendre Rastogi, Francis Ferraro, and Benjamin Van Durme. 2015. Script induction as language modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1681-1686, Lisbon, Portugal. Association for Computational Linguistics.
310
+ Lei Sha, Sujian Li, Baobao Chang, and Zhifang Sui. 2016. Joint learning templates and slots for event schema induction. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 428-434, San Diego, California. Association for Computational Linguistics.
311
+ Jiachen Sun, Weili Nie, Zhiding Yu, Z Morley Mao, and Chaowei Xiao. 2022. Pointdp: Diffusion-driven purification against adversarial attacks on 3d point cloud recognition. arXiv preprint arXiv:2208.09801.
312
+
313
+ Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In International Conference on Learning Representations.
314
+ Haoyang Wen, Ying Lin, Tuan Lai, Xiaoman Pan, Sha Li, Xudong Lin, Ben Zhou, Manling Li, Haoyu Wang, Hongming Zhang, Xiaodong Yu, Alexander Dong, Zhenhailong Wang, Yi Fung, Piyush Mishra, Qing Lyu, Didac Suris, Brian Chen, Susan Windisch Brown, Martha Palmer, Chris Callison-Burch, Carl Vondrick, Jiawei Han, Dan Roth, Shih-Fu Chang, and Heng Ji. 2021. RESIN: A dockerized schemaguided cross-document cross-lingual cross-media information extraction and event tracking system. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations, pages 133-143, Online. Association for Computational Linguistics.
315
+ Qitian Wu, Wentao Zhao, Zenan Li, David Wipf, and Junchi Yan. 2022. Nodeformer: A scalable graph structure learning transformer for node classification. In Advances in Neural Information Processing Systems.
316
+ Chaowei Xiao, Zhongzhu Chen, Kun Jin, Jiongxiao Wang, Weili Nie, Mingyan Liu, Anima Anandkumar, Bo Li, and Dawn Song. 2022. Densepure: Understanding diffusion models towards adversarial robustness. arXiv preprint arXiv:2211.00322.
317
+ Haiqin Yang, Xiaoyuan Yao, Yiqun Duan, Jianping Shen, Jie Zhong, and Kun Zhang. 2021. Progressive open-domain response generation with multiple controllable attributes. In Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event/Montreal, Canada, 19-27 August 2021, pages 3279-3285. ijcai.org.
318
+ Quan Yuan, Xiang Ren, Wenqi He, Chao Zhang, Xinhe Geng, Lifu Huang, Heng Ji, Chin-Yew Lin, and Jiawei Han. 2018. Open-schema event profiling for massive news corpora. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM '18, page 587-596, New York, NY, USA. Association for Computing Machinery.
319
+ Muhan Zhang, Shali Jiang, Zhicheng Cui, Roman Garnett, and Yixin Chen. 2019. D-vae: A variational autoencoder for directed acyclic graphs. Advances in Neural Information Processing Systems, 32.
320
+ Fangqi Zhu, Jun Gao, Changlong Yu, Wei Wang, Chen Xu, Xin Mu, Min Yang, and Ruifeng Xu. 2022. A generative approach for script event prediction via contrastive fine-tuning.
321
+
322
+ A For every submission:
323
+
324
+ A1. Did you describe the limitations of your work? limitation
325
+ A2. Did you discuss any potential risks of your work? limitation
326
+ A3. Do the abstract and introduction summarize the paper's main claims?
327
+ A4. Have you used AI writing assistants when working on this paper? Left blank.
328
+
329
+ B Did you use or create scientific artifacts?
330
+
331
+ 4,1
332
+
333
+ B1. Did you cite the creators of artifacts you used? 4.1
334
+ B2. Did you discuss the license or terms for use and / or distribution of any artifacts? Ethics Statement
335
+ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)? Ethics Statement
336
+ B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it? No personal information exists in the current datasets
337
+ B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.? We follow the previous work and use the same dataset.
338
+ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be. Left blank.
339
+
340
+ C Did you run computational experiments?
341
+
342
+ 4
343
+
344
+ C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used? 4
345
+
346
+ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?
347
+
348
+ We use the commonly used hyperparameters
349
+
350
+ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?
351
+
352
+ 4
353
+
354
+ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?
355
+
356
+ Not applicable. Left blank.
357
+
358
+ D Did you use human annotators (e.g., crowdworkers) or research with human participants?
359
+
360
+ Left blank.
361
+
362
+ D1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?
363
+
364
+ No response.
365
+
366
+ D2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants' demographic (e.g., country of residence)?
367
+
368
+ No response.
369
+
370
+ D3. Did you discuss whether and how consent was obtained from people whose data you're using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?
371
+
372
+ No response.
373
+
374
+ D4. Was the data collection protocol approved (or determined exempt) by an ethics review board?
375
+
376
+ No response.
377
+
378
+ D5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?
379
+
380
+ No response.
2023/A Diffusion Model for Event Skeleton Generation/images.zip ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e6f31629c973a274b426483acc79b687d852fdbc0e05071c384c2d035b728737
3
+ size 364591
2023/A Diffusion Model for Event Skeleton Generation/layout.json ADDED
The diff for this file is too large to render. See raw diff
 
2023/A Formal Perspective on Byte-Pair Encoding/6df33e58-ebbb-41f2-bab6-89afcc9dc353_content_list.json ADDED
The diff for this file is too large to render. See raw diff