Add Batch 789ba3d9-366a-4adc-9727-324a090d290a data
This view is limited to 50 files because it contains too many changes. See raw diff.
- .gitattributes +64 -0
- 2025/A Bayesian Optimization Approach to Machine Translation Reranking/e6893c9d-09ab-4c0e-a429-b12ff05203f6_content_list.json +1875 -0
- 2025/A Bayesian Optimization Approach to Machine Translation Reranking/e6893c9d-09ab-4c0e-a429-b12ff05203f6_model.json +0 -0
- 2025/A Bayesian Optimization Approach to Machine Translation Reranking/e6893c9d-09ab-4c0e-a429-b12ff05203f6_origin.pdf +3 -0
- 2025/A Bayesian Optimization Approach to Machine Translation Reranking/full.md +356 -0
- 2025/A Bayesian Optimization Approach to Machine Translation Reranking/images.zip +3 -0
- 2025/A Bayesian Optimization Approach to Machine Translation Reranking/layout.json +0 -0
- 2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/4d3753eb-da4e-4fdc-b38d-da331bf2b58e_content_list.json +0 -0
- 2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/4d3753eb-da4e-4fdc-b38d-da331bf2b58e_model.json +0 -0
- 2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/4d3753eb-da4e-4fdc-b38d-da331bf2b58e_origin.pdf +3 -0
- 2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/full.md +558 -0
- 2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/images.zip +3 -0
- 2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/layout.json +0 -0
- 2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/1137dc0d-0b8d-4c1f-bfdb-069a25b57730_content_list.json +1962 -0
- 2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/1137dc0d-0b8d-4c1f-bfdb-069a25b57730_model.json +0 -0
- 2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/1137dc0d-0b8d-4c1f-bfdb-069a25b57730_origin.pdf +3 -0
- 2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/full.md +344 -0
- 2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/images.zip +3 -0
- 2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/layout.json +0 -0
- 2025/A Distributional Perspective on Word Learning in Neural Language Models/d2c23648-7b19-4afb-a9b9-168fd986a2f1_content_list.json +0 -0
- 2025/A Distributional Perspective on Word Learning in Neural Language Models/d2c23648-7b19-4afb-a9b9-168fd986a2f1_model.json +0 -0
- 2025/A Distributional Perspective on Word Learning in Neural Language Models/d2c23648-7b19-4afb-a9b9-168fd986a2f1_origin.pdf +3 -0
- 2025/A Distributional Perspective on Word Learning in Neural Language Models/full.md +622 -0
- 2025/A Distributional Perspective on Word Learning in Neural Language Models/images.zip +3 -0
- 2025/A Distributional Perspective on Word Learning in Neural Language Models/layout.json +0 -0
- 2025/A Grounded Typology of Word Classes/a9196626-0ba8-4e82-938e-60cbb4572659_content_list.json +0 -0
- 2025/A Grounded Typology of Word Classes/a9196626-0ba8-4e82-938e-60cbb4572659_model.json +0 -0
- 2025/A Grounded Typology of Word Classes/a9196626-0ba8-4e82-938e-60cbb4572659_origin.pdf +3 -0
- 2025/A Grounded Typology of Word Classes/full.md +514 -0
- 2025/A Grounded Typology of Word Classes/images.zip +3 -0
- 2025/A Grounded Typology of Word Classes/layout.json +0 -0
- 2025/A Logical Fallacy-Informed Framework for Argument Generation/ef68dfd3-62d8-4d31-a579-f4bb4e064dce_content_list.json +0 -0
- 2025/A Logical Fallacy-Informed Framework for Argument Generation/ef68dfd3-62d8-4d31-a579-f4bb4e064dce_model.json +0 -0
- 2025/A Logical Fallacy-Informed Framework for Argument Generation/ef68dfd3-62d8-4d31-a579-f4bb4e064dce_origin.pdf +3 -0
- 2025/A Logical Fallacy-Informed Framework for Argument Generation/full.md +505 -0
- 2025/A Logical Fallacy-Informed Framework for Argument Generation/images.zip +3 -0
- 2025/A Logical Fallacy-Informed Framework for Argument Generation/layout.json +0 -0
- 2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/a6e57a98-a206-4f15-a537-4cd8fddecece_content_list.json +1668 -0
- 2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/a6e57a98-a206-4f15-a537-4cd8fddecece_model.json +2224 -0
- 2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/a6e57a98-a206-4f15-a537-4cd8fddecece_origin.pdf +3 -0
- 2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/full.md +317 -0
- 2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/images.zip +3 -0
- 2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/layout.json +0 -0
- 2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/73b2862f-4128-4c3f-916f-50e750cc071e_content_list.json +0 -0
- 2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/73b2862f-4128-4c3f-916f-50e750cc071e_model.json +0 -0
- 2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/73b2862f-4128-4c3f-916f-50e750cc071e_origin.pdf +3 -0
- 2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/full.md +442 -0
- 2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/images.zip +3 -0
- 2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/layout.json +0 -0
- 2025/A Novel Computational Modeling Foundation for Automatic Coherence Assessment/d0cb348e-9496-4e82-aed0-30776483e396_content_list.json +0 -0
.gitattributes
CHANGED
@@ -506,3 +506,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 2025/Unleashing[[:space:]]Multi-Hop[[:space:]]Reasoning[[:space:]]Potential[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]through[[:space:]]Repetition[[:space:]]of[[:space:]]Misordered[[:space:]]Context/088db04b-e4aa-43ea-badc-5219cd1ee34c_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2025/Unlocking[[:space:]]the[[:space:]]Planning[[:space:]]Capabilities[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]with[[:space:]]Maximum[[:space:]]Diversity[[:space:]]Fine-tuning/245c44bb-9120-40a3-ac78-ceceffd2a860_origin.pdf filter=lfs diff=lfs merge=lfs -text
 2025/Unmasking[[:space:]]Database[[:space:]]Vulnerabilities_[[:space:]]Zero-Knowledge[[:space:]]Schema[[:space:]]Inference[[:space:]]Attacks[[:space:]]in[[:space:]]Text-to-SQL[[:space:]]Systems/dddac1c3-7d1e-41b9-9290-20b5f12ae3cf_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Bayesian[[:space:]]Optimization[[:space:]]Approach[[:space:]]to[[:space:]]Machine[[:space:]]Translation[[:space:]]Reranking/e6893c9d-09ab-4c0e-a429-b12ff05203f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Cognitive[[:space:]]Evaluation[[:space:]]Benchmark[[:space:]]of[[:space:]]Image[[:space:]]Reasoning[[:space:]]and[[:space:]]Description[[:space:]]for[[:space:]]Large[[:space:]]Vision-Language[[:space:]]Models/4d3753eb-da4e-4fdc-b38d-da331bf2b58e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Data-Driven[[:space:]]Method[[:space:]]for[[:space:]]Analyzing[[:space:]]and[[:space:]]Quantifying[[:space:]]Lyrics-Dance[[:space:]]Motion[[:space:]]Relationships/1137dc0d-0b8d-4c1f-bfdb-069a25b57730_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Distributional[[:space:]]Perspective[[:space:]]on[[:space:]]Word[[:space:]]Learning[[:space:]]in[[:space:]]Neural[[:space:]]Language[[:space:]]Models/d2c23648-7b19-4afb-a9b9-168fd986a2f1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Grounded[[:space:]]Typology[[:space:]]of[[:space:]]Word[[:space:]]Classes/a9196626-0ba8-4e82-938e-60cbb4572659_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Logical[[:space:]]Fallacy-Informed[[:space:]]Framework[[:space:]]for[[:space:]]Argument[[:space:]]Generation/ef68dfd3-62d8-4d31-a579-f4bb4e064dce_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Mixed-Language[[:space:]]Multi-Document[[:space:]]News[[:space:]]Summarization[[:space:]]Dataset[[:space:]]and[[:space:]]a[[:space:]]Graphs-Based[[:space:]]Extract-Generate[[:space:]]Model/a6e57a98-a206-4f15-a537-4cd8fddecece_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Multi-modal[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]with[[:space:]]Graph-of-Thought[[:space:]]for[[:space:]]Effective[[:space:]]Recommendation/73b2862f-4128-4c3f-916f-50e750cc071e_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Novel[[:space:]]Computational[[:space:]]Modeling[[:space:]]Foundation[[:space:]]for[[:space:]]Automatic[[:space:]]Coherence[[:space:]]Assessment/d0cb348e-9496-4e82-aed0-30776483e396_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Picture[[:space:]]is[[:space:]]Worth[[:space:]]A[[:space:]]Thousand[[:space:]]Numbers_[[:space:]]Enabling[[:space:]]LLMs[[:space:]]Reason[[:space:]]about[[:space:]]Time[[:space:]]Series[[:space:]]via[[:space:]]Visualization/9a0045b8-13b9-49d2-9843-05afcacfae53_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Probabilistic[[:space:]]Framework[[:space:]]for[[:space:]]LLM[[:space:]]Hallucination[[:space:]]Detection[[:space:]]via[[:space:]]Belief[[:space:]]Tree[[:space:]]Propagation/328ee7e6-3185-4b21-8fff-21b2ff94779d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Survey[[:space:]]of[[:space:]]NLP[[:space:]]Progress[[:space:]]in[[:space:]]Sino-Tibetan[[:space:]]Low-Resource[[:space:]]Languages/a910ffcb-0154-46ef-bbf5-d6bf643ac305_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Survey[[:space:]]of[[:space:]]QUD[[:space:]]Models[[:space:]]for[[:space:]]Discourse[[:space:]]Processing/605d7a53-855f-4b6d-94f4-2c405a4d8361_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Systematic[[:space:]]Examination[[:space:]]of[[:space:]]Preference[[:space:]]Learning[[:space:]]through[[:space:]]the[[:space:]]Lens[[:space:]]of[[:space:]]Instruction-Following/107de65f-a4ba-4517-999c-1dc755f8251d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Template[[:space:]]Is[[:space:]]All[[:space:]]You[[:space:]]Meme/0b80212a-7e17-4ed8-9999-83bdbb00de0a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Top-down[[:space:]]Graph-based[[:space:]]Tool[[:space:]]for[[:space:]]Modeling[[:space:]]Classical[[:space:]]Semantic[[:space:]]Maps_[[:space:]]A[[:space:]]Case[[:space:]]Study[[:space:]]of[[:space:]]Supplementary[[:space:]]Adverbs/69a59307-fa68-463f-9d78-11c2ed5f1566_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Unified[[:space:]]Supervised[[:space:]]and[[:space:]]Unsupervised[[:space:]]Dialogue[[:space:]]Topic[[:space:]]Segmentation[[:space:]]Framework[[:space:]]Based[[:space:]]on[[:space:]]Utterance[[:space:]]Pair[[:space:]]Modeling/c8e9412f-d083-4dcb-baf3-550817ecaf4b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/A[[:space:]]Zero-Shot[[:space:]]Open-Vocabulary[[:space:]]Pipeline[[:space:]]for[[:space:]]Dialogue[[:space:]]Understanding/377806de-4c3a-4591-b1dd-37624d381603_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ACCESS[[:space:]]_[[:space:]]A[[:space:]]Benchmark[[:space:]]for[[:space:]]Abstract[[:space:]]Causal[[:space:]]Event[[:space:]]Discovery[[:space:]]and[[:space:]]Reasoning/dba615fd-e933-40db-b521-4c7b6a606e65_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ACCORD_[[:space:]]Closing[[:space:]]the[[:space:]]Commonsense[[:space:]]Measurability[[:space:]]Gap/51b70a61-a300-4946-bd40-f5512a80d45c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AEGIS2.0_[[:space:]]A[[:space:]]Diverse[[:space:]]AI[[:space:]]Safety[[:space:]]Dataset[[:space:]]and[[:space:]]Risks[[:space:]]Taxonomy[[:space:]]for[[:space:]]Alignment[[:space:]]of[[:space:]]LLM[[:space:]]Guardrails/7f1bb1e8-078c-467b-8db9-dc815a072cda_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AI-Assisted[[:space:]]Human[[:space:]]Evaluation[[:space:]]of[[:space:]]Machine[[:space:]]Translation/95fd3a44-9967-48e5-be75-6ca32e9fb9d4_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AI-LieDar[[:space:]]_[[:space:]]Examine[[:space:]]the[[:space:]]Trade-off[[:space:]]Between[[:space:]]Utility[[:space:]]and[[:space:]]Truthfulness[[:space:]]in[[:space:]]LLM[[:space:]]Agents/5ed2dce1-2488-4488-b795-017413758d5a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AID_[[:space:]]Adaptive[[:space:]]Integration[[:space:]]of[[:space:]]Detectors[[:space:]]for[[:space:]]Safe[[:space:]]AI[[:space:]]with[[:space:]]Language[[:space:]]Models/cb3f4496-276a-4343-a51b-534f43ecc759_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ALERT_[[:space:]]An[[:space:]]LLM-powered[[:space:]]Benchmark[[:space:]]for[[:space:]]Automatic[[:space:]]Evaluation[[:space:]]of[[:space:]]Recommendation[[:space:]]Explanations/2419e128-1be2-460d-a529-dcd87660ea56_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ALPACA[[:space:]]AGAINST[[:space:]]VICUNA_[[:space:]]Using[[:space:]]LLMs[[:space:]]to[[:space:]]Uncover[[:space:]]Memorization[[:space:]]of[[:space:]]LLMs/91cc7a93-b167-42f5-ad1e-629ca3e8c344_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ALTER_[[:space:]]Augmentation[[:space:]]for[[:space:]]Large-Table-Based[[:space:]]Reasoning/6d5f1f5e-a1b8-46af-af4e-6374855b16d7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ALiiCE_[[:space:]]Evaluating[[:space:]]Positional[[:space:]]Fine-grained[[:space:]]Citation[[:space:]]Generation/1bb883fc-7b5e-4ece-ac56-8a8252f476ae_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/ALinFiK_[[:space:]]Learning[[:space:]]to[[:space:]]Approximate[[:space:]]Linearized[[:space:]]Future[[:space:]]Influence[[:space:]]Kernel[[:space:]]for[[:space:]]Scalable[[:space:]]Third-Party[[:space:]]LLM[[:space:]]Data[[:space:]]Valuation/8a4cf635-8867-46fe-ae31-cad9189d319c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Active[[:space:]]Few-Shot[[:space:]]Learning[[:space:]]for[[:space:]]Text[[:space:]]Classification/1f71a233-7606-4149-981f-d07f210615f6_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AdTEC_[[:space:]]A[[:space:]]Unified[[:space:]]Benchmark[[:space:]]for[[:space:]]Evaluating[[:space:]]Text[[:space:]]Quality[[:space:]]in[[:space:]]Search[[:space:]]Engine[[:space:]]Advertising/d08ab2aa-68db-4653-a096-89cf83652356_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AdaCAD_[[:space:]]Adaptively[[:space:]]Decoding[[:space:]]to[[:space:]]Balance[[:space:]]Conflicts[[:space:]]between[[:space:]]Contextual[[:space:]]and[[:space:]]Parametric[[:space:]]Knowledge/ccbfbf24-6776-43af-9715-49834a214263_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AdaMergeX_[[:space:]]Cross-Lingual[[:space:]]Transfer[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]via[[:space:]]Adaptive[[:space:]]Adapter[[:space:]]Merging/8b39b034-d377-424a-a0d3-f7fefaf33c8d_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Adapting[[:space:]]Sentence-level[[:space:]]Automatic[[:space:]]Metrics[[:space:]]for[[:space:]]Document-level[[:space:]]Simplification[[:space:]]Evaluation/62862604-1763-4a1b-a361-a1046b1ed4b2_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Adaptive[[:space:]]Prompting_[[:space:]]Ad-hoc[[:space:]]Prompt[[:space:]]Composition[[:space:]]for[[:space:]]Social[[:space:]]Bias[[:space:]]Detection/086cc640-00ee-4af3-8143-397cd314de30_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Advancing[[:space:]]MoE[[:space:]]Efficiency_[[:space:]]A[[:space:]]Collaboration-Constrained[[:space:]]Routing[[:space:]](C2R)[[:space:]]Strategy[[:space:]]for[[:space:]]Better[[:space:]]Expert[[:space:]]Parallelism[[:space:]]Design/72db5d8e-826d-4119-9031-09e237d11b24_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AdvisorQA_[[:space:]]Towards[[:space:]]Helpful[[:space:]]and[[:space:]]Harmless[[:space:]]Advice-seeking[[:space:]]Question[[:space:]]Answering[[:space:]]with[[:space:]]Collective[[:space:]]Intelligence/066b86cc-06b7-47d7-b463-511f99d7d23c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/AfriHate_[[:space:]]A[[:space:]]Multilingual[[:space:]]Collection[[:space:]]of[[:space:]]Hate[[:space:]]Speech[[:space:]]and[[:space:]]Abusive[[:space:]]Language[[:space:]]Datasets[[:space:]]for[[:space:]]African[[:space:]]Languages/8aaa1fd4-53be-43e9-8c7d-010a53d982ea_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Unsupervised[[:space:]]Sentence[[:space:]]Representation[[:space:]]Learning[[:space:]]with[[:space:]]Syntactically[[:space:]]Aligned[[:space:]]Negative[[:space:]]Samples/94d3510f-a722-4eed-ac80-6f8245f9a0bb_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Unsupervised[[:space:]]Speech-text[[:space:]]word-level[[:space:]]alignment[[:space:]]with[[:space:]]Dynamic[[:space:]]Programming/0ca06319-831b-420c-a33f-29104a939f0c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Untangling[[:space:]]Hate[[:space:]]Speech[[:space:]]Definitions_[[:space:]]A[[:space:]]Semantic[[:space:]]Componential[[:space:]]Analysis[[:space:]]Across[[:space:]]Cultures[[:space:]]and[[:space:]]Domains/89730f7e-a958-468e-9163-a052c38a65c7_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Using[[:space:]]Linguistic[[:space:]]Entrainment[[:space:]]to[[:space:]]Evaluate[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]Use[[:space:]]in[[:space:]]Cognitive[[:space:]]Behavioral[[:space:]]Therapy/ea97bcdb-bd92-4508-8817-a93a694f781b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Using[[:space:]]Review[[:space:]]Combination[[:space:]]and[[:space:]]Pseudo-Tokens[[:space:]]for[[:space:]]Aspect[[:space:]]Sentiment[[:space:]]Quad[[:space:]]Prediction/71971f5e-cb3c-4d13-8784-07de97020d31_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/VANE-Bench_[[:space:]]Video[[:space:]]Anomaly[[:space:]]Evaluation[[:space:]]Benchmark[[:space:]]for[[:space:]]Conversational[[:space:]]LMMs/985470d8-dc7b-46b0-a61c-0e82122f0498_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/VLind-Bench_[[:space:]]Measuring[[:space:]]Language[[:space:]]Priors[[:space:]]in[[:space:]]Large[[:space:]]Vision-Language[[:space:]]Models/c8eb222c-51c8-4892-81cf-2aa7658483c1_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Verifiable[[:space:]]Format[[:space:]]Control[[:space:]]for[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Generations/e90f76f8-cb8e-48f2-915c-d926305f175b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/VisualCoder_[[:space:]]Guiding[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Code[[:space:]]Execution[[:space:]]with[[:space:]]Fine-grained[[:space:]]Multimodal[[:space:]]Chain-of-Thought[[:space:]]Reasoning/57366c81-59a5-44fb-8ca6-3774d95c04ea_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Vulnerability[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]to[[:space:]]Output[[:space:]]Prefix[[:space:]]Jailbreaks_[[:space:]]Impact[[:space:]]of[[:space:]]Positions[[:space:]]on[[:space:]]Safety/aa9f8b8b-6aa0-4be8-9858-e3ad701df403_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/WaterSeeker_[[:space:]]Pioneering[[:space:]]Efficient[[:space:]]Detection[[:space:]]of[[:space:]]Watermarked[[:space:]]Segments[[:space:]]in[[:space:]]Large[[:space:]]Documents/e50e1469-e7f9-4242-ab4d-b106353125ff_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Weight-based[[:space:]]Analysis[[:space:]]of[[:space:]]Detokenization[[:space:]]in[[:space:]]Language[[:space:]]Models_[[:space:]]Understanding[[:space:]]the[[:space:]]First[[:space:]]Stage[[:space:]]of[[:space:]]Inference[[:space:]]Without[[:space:]]Inference/cdc88bef-e6a7-4948-8a97-3db10f21b1ad_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/What[[:space:]]Is[[:space:]]Missing[[:space:]]in[[:space:]]Multilingual[[:space:]]Visual[[:space:]]Reasoning[[:space:]]and[[:space:]]How[[:space:]]to[[:space:]]Fix[[:space:]]It/2868d188-d3d3-49c1-9bda-b43182548b48_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/What[[:space:]]can[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]Capture[[:space:]]about[[:space:]]Code[[:space:]]Functional[[:space:]]Equivalence_/06b1697f-6f9a-40aa-a1f3-23892379ebdc_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/When[[:space:]]and[[:space:]]How[[:space:]]to[[:space:]]Augment[[:space:]]Your[[:space:]]Input_[[:space:]]Question[[:space:]]Routing[[:space:]]Helps[[:space:]]Balance[[:space:]]the[[:space:]]Accuracy[[:space:]]and[[:space:]]Efficiency[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models/e974f6f2-f681-4063-a997-470831287816_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/When[[:space:]]natural[[:space:]]language[[:space:]]is[[:space:]]not[[:space:]]enough_[[:space:]]The[[:space:]]limits[[:space:]]of[[:space:]]in-context[[:space:]]learning[[:space:]]demonstrations[[:space:]]in[[:space:]]multilingual[[:space:]]reasoning/816a45a0-a534-4abe-afde-d8351f506341_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Where[[:space:]]is[[:space:]]this[[:space:]]coming[[:space:]]from_[[:space:]]Making[[:space:]]groundedness[[:space:]]count[[:space:]]in[[:space:]]the[[:space:]]evaluation[[:space:]]of[[:space:]]Document[[:space:]]VQA[[:space:]]models/9bed4d5a-938b-4fc1-8d0f-1e227797466c_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/WordGame_[[:space:]]Efficient[[:space:]]&[[:space:]]Effective[[:space:]]LLM[[:space:]]Jailbreak[[:space:]]via[[:space:]]Simultaneous[[:space:]]Obfuscation[[:space:]]in[[:space:]]Query[[:space:]]and[[:space:]]Response/fc40dd1e-fb68-4d71-8946-7bd6e24c436b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/WorldMedQA-V_[[:space:]]a[[:space:]]multilingual,[[:space:]]multimodal[[:space:]]medical[[:space:]]examination[[:space:]]dataset[[:space:]]for[[:space:]]multimodal[[:space:]]language[[:space:]]models[[:space:]]evaluation/ee572d32-da4f-4ee0-ac79-ddd24a16894b_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/XAMPLER_[[:space:]]Learning[[:space:]]to[[:space:]]Retrieve[[:space:]]Cross-Lingual[[:space:]]In-Context[[:space:]]Examples/fd93433e-4c2b-4fe4-bf90-89964bd0634f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Zero-Shot[[:space:]]Keyphrase[[:space:]]Generation_[[:space:]]Investigating[[:space:]]Specialized[[:space:]]Instructions[[:space:]]and[[:space:]]Multi-sample[[:space:]]Aggregation[[:space:]]on[[:space:]]Large[[:space:]]Language[[:space:]]Models/8bace04c-bf3d-46e5-992a-ec544d860a8f_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/Zero-Shot[[:space:]]Strategies[[:space:]]for[[:space:]]Length-Controllable[[:space:]]Summarization/aa504389-b4c8-4da9-b091-e852c5255e02_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/kNN[[:space:]]For[[:space:]]Whisper[[:space:]]And[[:space:]]Its[[:space:]]Effect[[:space:]]On[[:space:]]Bias[[:space:]]And[[:space:]]Speaker[[:space:]]Adaptation/9441e04e-d01d-412d-be5d-8f4e4ea847ca_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/“All[[:space:]]that[[:space:]]Glitters”_[[:space:]]Techniques[[:space:]]for[[:space:]]Evaluations[[:space:]]with[[:space:]]Unreliable[[:space:]]Model[[:space:]]and[[:space:]]Human[[:space:]]Annotations/aa5612b4-b8ab-4cd3-8c26-056afdba757a_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/“Women[[:space:]]do[[:space:]]not[[:space:]]have[[:space:]]heart[[:space:]]attacks!”[[:space:]]Gender[[:space:]]Biases[[:space:]]in[[:space:]]Automatically[[:space:]]Generated[[:space:]]Clinical[[:space:]]Cases[[:space:]]in[[:space:]]French/48ac4547-1684-49a8-b614-65272eb49f74_origin.pdf filter=lfs diff=lfs merge=lfs -text
+2025/𝒮2IT_[[:space:]]Stepwise[[:space:]]Syntax[[:space:]]Integration[[:space:]]Tuning[[:space:]]for[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Aspect[[:space:]]Sentiment[[:space:]]Quad[[:space:]]Prediction/b7bf8615-4c9a-42fc-bd3e-0d822d42f1ab_origin.pdf filter=lfs diff=lfs merge=lfs -text
2025/A Bayesian Optimization Approach to Machine Translation Reranking/e6893c9d-09ab-4c0e-a429-b12ff05203f6_content_list.json
ADDED
@@ -0,0 +1,1875 @@
[
  {
    "type": "text",
    "text": "A Bayesian Optimization Approach to Machine Translation Reranking",
    "text_level": 1,
    "bbox": [131, 89, 867, 111],
    "page_idx": 0
  },
  {
    "type": "text",
    "text": "Julius Cheng<sup>1</sup> Maike Züflé<sup>2</sup> Vilém Zouhar<sup>3</sup> Andreas Vlachos<sup>1</sup>",
    "bbox": [211, 136, 791, 154],
    "page_idx": 0
  },
  {
    "type": "text",
    "text": "$^{1}$ University of Cambridge $^{2}$ Karlsruhe Institute of Technology $^{3}$ ETH Zürich {jncc3, av308}@cam.ac.uk maike.zuefle@kit.edu vzouhar@ethz.ch",
    "bbox": [124, 167, 877, 200],
    "page_idx": 0
  },
  {
    "type": "text",
    "text": "Abstract",
    "text_level": 1,
    "bbox": [260, 252, 339, 266],
    "page_idx": 0
  },
  {
    "type": "text",
    "text": "Reranking, or scoring a list of prediction candidates from a machine translation system with an external scoring model and returning the highest-scoring candidate, remains a simple and effective method for improving prediction quality. However, reranking with high-quality scoring models can add substantial computational cost to the translation pipeline, which we address in this work by framing list reranking as a Bayesian optimization (BayesOpt) problem over the candidate list, where unknown scores are modeled with a Gaussian process. This algorithm scores candidates iteratively, choosing next candidates by balancing between exploration, choosing to score those that differ from candidates already scored, and exploitation, choosing to score those that resemble high-scoring candidates. This procedure finds high-scoring candidates while scoring only a fraction of the candidate list; given candidate lists of 200 random samples (before deduplication), our method achieves the same CometKiwi score using only 70 scoring evaluations on average, compared to scoring a random subset of 180 candidates. We also propose multi-fidelity BayesOpt for list reranking, where scores obtained from a noisier but cheaper proxy scoring model are incorporated into the search process. We show that well-trained distilled proxy scorers can further improve the performance of BayesOpt.",
    "bbox": [144, 281, 460, 721],
    "page_idx": 0
  },
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "1 Introduction",
|
| 62 |
+
"text_level": 1,
|
| 63 |
+
"bbox": [
|
| 64 |
+
114,
|
| 65 |
+
746,
|
| 66 |
+
258,
|
| 67 |
+
760
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "text",
|
| 73 |
+
"text": "Reranking is a framework for prediction where probabilistic generator model produces a list of candidates, and a separate evaluator or scoring model produces scores for each of the candidates which are used to determine the final prediction. Reranking has a long history in natural language processing for sequential prediction problems such as dependency parsing (Collins and Koo, 2005;",
|
| 74 |
+
"bbox": [
|
| 75 |
+
112,
|
| 76 |
+
771,
|
| 77 |
+
487,
|
| 78 |
+
901
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "image",
|
| 84 |
+
"img_path": "images/9114944e53665f6d01fb4e197f796eebce5b94a6902102877600619ae0cf753d.jpg",
|
| 85 |
+
"image_caption": [],
|
| 86 |
+
"image_footnote": [],
|
| 87 |
+
"bbox": [
|
| 88 |
+
510,
|
| 89 |
+
247,
|
| 90 |
+
884,
|
| 91 |
+
349
|
| 92 |
+
],
|
| 93 |
+
"page_idx": 0
|
| 94 |
+
},
|
| 95 |
+
{
|
| 96 |
+
"type": "image",
|
| 97 |
+
"img_path": "images/2b6359c7f5b714dbb109abaf750827612f84568f73f9163b30e1654fafac843a.jpg",
|
| 98 |
+
"image_caption": [
|
| 99 |
+
"Figure 1: A machine translation system generates candidates Aa, Bb, Cc, Dd, and Ee. The goal of BayesOpt is to find the highest scoring candidate with fewer scoring calls. An acquisition function selects the next candidate to score repeatedly until budget is reached, and the candidate with the highest score so far is returned."
|
| 100 |
+
],
|
| 101 |
+
"image_footnote": [],
|
| 102 |
+
"bbox": [
|
| 103 |
+
509,
|
| 104 |
+
353,
|
| 105 |
+
884,
|
| 106 |
+
448
|
| 107 |
+
],
|
| 108 |
+
"page_idx": 0
|
| 109 |
+
},
|
| 110 |
+
{
|
| 111 |
+
"type": "text",
|
| 112 |
+
"text": "Charniak and Johnson, 2005) and language modeling problems such as summarization (Ravaut et al., 2022) and machine translation (MT; Fernandes et al., 2022).",
|
| 113 |
+
"bbox": [
|
| 114 |
+
507,
|
| 115 |
+
576,
|
| 116 |
+
882,
|
| 117 |
+
640
|
| 118 |
+
],
|
| 119 |
+
"page_idx": 0
|
| 120 |
+
},
|
| 121 |
+
{
|
| 122 |
+
"type": "text",
|
| 123 |
+
"text": "The quality of models for automatic MT evaluation has surged in recent years due to innovations in neural network architecture (Rei et al., 2020; Juraska et al., 2023; Sellam et al., 2020) as well as the abundance of training data (Freitag et al., 2023b; Kocmi et al., 2024a). These evaluation models are often repurposed for reranking to further improve the performance of an MT system. For instance, in the WMT 2024 shared task (Kocmi et al., 2024a), 5 out of 19 systems, including the overall best submission (Rei et al., 2024) use reranking with Comet models (Rei et al., 2020) and/or minimum Bayes risk decoding (MBR; Eikema and Aziz, 2020), which can be interpreted as a form of reranking. Prior to the application of automatic evaluation metrics to reranking, other scoring methods have been proposed, including discriminatively",
|
| 124 |
+
"bbox": [
|
| 125 |
+
507,
|
| 126 |
+
645,
|
| 127 |
+
884,
|
| 128 |
+
919
|
| 129 |
+
],
|
| 130 |
+
"page_idx": 0
|
| 131 |
+
},
|
| 132 |
+
{
|
| 133 |
+
"type": "page_footnote",
|
| 134 |
+
"text": "<sup>0</sup>Paper code: github.com/juliusc/bayesopt_beranking",
|
| 135 |
+
"bbox": [
|
| 136 |
+
139,
|
| 137 |
+
903,
|
| 138 |
+
465,
|
| 139 |
+
919
|
| 140 |
+
],
|
| 141 |
+
"page_idx": 0
|
| 142 |
+
},
|
| 143 |
+
{
|
| 144 |
+
"type": "page_number",
|
| 145 |
+
"text": "2849",
|
| 146 |
+
"bbox": [
|
| 147 |
+
480,
|
| 148 |
+
927,
|
| 149 |
+
519,
|
| 150 |
+
940
|
| 151 |
+
],
|
| 152 |
+
"page_idx": 0
|
| 153 |
+
},
|
| 154 |
+
{
|
| 155 |
+
"type": "footer",
|
| 156 |
+
"text": "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies",
|
| 157 |
+
"bbox": [
|
| 158 |
+
16,
|
| 159 |
+
945,
|
| 160 |
+
976,
|
| 161 |
+
958
|
| 162 |
+
],
|
| 163 |
+
"page_idx": 0
|
| 164 |
+
},
|
| 165 |
+
{
|
| 166 |
+
"type": "footer",
|
| 167 |
+
"text": "(Volume 1: Long Papers), pages 2849-2862",
|
| 168 |
+
"bbox": [
|
| 169 |
+
364,
|
| 170 |
+
958,
|
| 171 |
+
633,
|
| 172 |
+
971
|
| 173 |
+
],
|
| 174 |
+
"page_idx": 0
|
| 175 |
+
},
|
| 176 |
+
{
|
| 177 |
+
"type": "footer",
|
| 178 |
+
"text": "April 29 - May 4, 2025 ©2025 Association for Computational Linguistics",
|
| 179 |
+
"bbox": [
|
| 180 |
+
273,
|
| 181 |
+
972,
|
| 182 |
+
722,
|
| 183 |
+
985
|
| 184 |
+
],
|
| 185 |
+
"page_idx": 0
|
| 186 |
+
},
|
| 187 |
+
{
|
| 188 |
+
"type": "text",
|
| 189 |
+
"text": "trained classifiers (Lee et al., 2021; Bhattacharyya et al., 2021) and noisy channel decoding (Yee et al., 2019).",
|
| 190 |
+
"bbox": [
|
| 191 |
+
112,
|
| 192 |
+
84,
|
| 193 |
+
487,
|
| 194 |
+
131
|
| 195 |
+
],
|
| 196 |
+
"page_idx": 1
|
| 197 |
+
},
|
| 198 |
+
{
|
| 199 |
+
"type": "text",
|
| 200 |
+
"text": "So, while LMs for MT generation for greatly improved in recent years, scoring models have seen a commensurate increase in quality (Zerva et al., 2022), and thus reranking remains relevant method for improving translation quality. However, the scoring models have also grown dramatically in size, increasing the computational requirements for reranking.",
|
| 201 |
+
"bbox": [
|
| 202 |
+
112,
|
| 203 |
+
135,
|
| 204 |
+
487,
|
| 205 |
+
263
|
| 206 |
+
],
|
| 207 |
+
"page_idx": 1
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"type": "text",
|
| 211 |
+
"text": "In this work, we address the computational cost of reranking by framing it as a search problem over the list of candidates. The goal of search in this setting is to find high-scoring candidates in a small number of steps, thereby avoiding the cost of scoring the full list. Our proposed algorithm uses Gaussian processes to model uncertainty about unseen scores and Bayesian optimization (BayesOpt; Shahriari et al., 2016) to choose which candidates to score next.",
|
| 212 |
+
"bbox": [
|
| 213 |
+
112,
|
| 214 |
+
267,
|
| 215 |
+
487,
|
| 216 |
+
426
|
| 217 |
+
],
|
| 218 |
+
"page_idx": 1
|
| 219 |
+
},
|
| 220 |
+
{
|
| 221 |
+
"type": "text",
|
| 222 |
+
"text": "GPs are flexible priors over functions which are able to model the complex and nonlinear relationship between each candidate and its score. GPs make very few assumptions about the distribution and base their predictions are mostly on observed points, which enables them to easily adapt to different candidate lists across translation instances. BayesOpt is a sequential black-box optimization method that uses the posterior mean and variance of unobserved data points to decide which points to evaluate next.",
|
| 223 |
+
"bbox": [
|
| 224 |
+
112,
|
| 225 |
+
430,
|
| 226 |
+
487,
|
| 227 |
+
606
|
| 228 |
+
],
|
| 229 |
+
"page_idx": 1
|
| 230 |
+
},
|
| 231 |
+
{
|
| 232 |
+
"type": "text",
|
| 233 |
+
"text": "We apply BayesOpt and GPs (BayesOpt+GP) to MT list reranking in a straightforward manner and show that it obtains close to the maximum achievable score with only a fraction of score evaluations. For example, the maximal obtainable score across 200 randomly sampled candidates on our test set is 0.8216 CometKiwi; our method achieves 0.8210 with 70 score evaluations on average, while scoring 70 random candidates attains 0.8149, a difference of 0.0061 which is likely to be human-detectable according to (Kocmi et al., 2024b). We also propose a number of search-based baselines which outperform random selection, all of which are outperformed by BayesOpt+GP.",
|
| 234 |
+
"bbox": [
|
| 235 |
+
112,
|
| 236 |
+
611,
|
| 237 |
+
487,
|
| 238 |
+
835
|
| 239 |
+
],
|
| 240 |
+
"page_idx": 1
|
| 241 |
+
},
|
| 242 |
+
{
|
| 243 |
+
"type": "text",
|
| 244 |
+
"text": "Then, building upon previous works that use a faster but noisier proxy scoring function to prune the candidate list (Fernandes et al., 2022; Eikema and Aziz, 2022), we propose a multi-fidelity extension to BayesOpt which incorporates proxy scores",
|
| 245 |
+
"bbox": [
|
| 246 |
+
112,
|
| 247 |
+
839,
|
| 248 |
+
487,
|
| 249 |
+
919
|
| 250 |
+
],
|
| 251 |
+
"page_idx": 1
|
| 252 |
+
},
|
| 253 |
+
{
|
| 254 |
+
"type": "text",
|
| 255 |
+
"text": "to improve estimation. This is related in motivation to coarse-to-fine methods (Petrov, 2011) and model cascading (Chen et al., 2023), where the use of a faster proxy model reduces the use of the main model. In our multi-fidelity experiments, we find that smaller proxy scoring models distilled from the main model can assist BayesOpt+GP in finding high-scoring candidates earlier.",
|
| 256 |
+
"bbox": [
|
| 257 |
+
507,
|
| 258 |
+
84,
|
| 259 |
+
882,
|
| 260 |
+
212
|
| 261 |
+
],
|
| 262 |
+
"page_idx": 1
|
| 263 |
+
},
|
| 264 |
+
{
|
| 265 |
+
"type": "text",
|
| 266 |
+
"text": "2 Background",
|
| 267 |
+
"text_level": 1,
|
| 268 |
+
"bbox": [
|
| 269 |
+
507,
|
| 270 |
+
225,
|
| 271 |
+
650,
|
| 272 |
+
242
|
| 273 |
+
],
|
| 274 |
+
"page_idx": 1
|
| 275 |
+
},
|
| 276 |
+
{
|
| 277 |
+
"type": "text",
|
| 278 |
+
"text": "2.1 Translation generation and reranking",
|
| 279 |
+
"text_level": 1,
|
| 280 |
+
"bbox": [
|
| 281 |
+
507,
|
| 282 |
+
252,
|
| 283 |
+
852,
|
| 284 |
+
268
|
| 285 |
+
],
|
| 286 |
+
"page_idx": 1
|
| 287 |
+
},
|
| 288 |
+
{
|
| 289 |
+
"type": "text",
|
| 290 |
+
"text": "In a typical machine translation setting, a conditional language model (LM) is trained to model the probability of the next token $y_{t}$ given a source sentence $x$ and previous tokens: $p(y_{t}|x,y_{1},\\dots,y_{t - 1})$ . These probabilities can be autoregressively combined to model a sequence probability $p(y|x)$ . Usually, beam search is used to search for a $y$ which maximizes log probability combined with a length normalization objective (Wu et al., 2016).",
|
| 291 |
+
"bbox": [
|
| 292 |
+
507,
|
| 293 |
+
273,
|
| 294 |
+
884,
|
| 295 |
+
418
|
| 296 |
+
],
|
| 297 |
+
"page_idx": 1
|
| 298 |
+
},
|
| 299 |
+
{
|
| 300 |
+
"type": "text",
|
| 301 |
+
"text": "In a basic list reranking setting, given $x$ , the LM is used to generate a candidate list $\\mathcal{C}_x = [y_1,\\dots,y_n]$ with a decoding algorithm such as beam search or ancestral sampling. A scoring function $s(x,y_i)$ is then applied to each $y_{i}\\in \\mathcal{C}_{x}$ , and the best scoring sequence $\\arg \\max_{y_i\\in \\mathcal{C}_x}s(x,y_i)$ is returned. A common choice of scoring function is a quality estimation (QE) model which directly predicts a scalar value representing the quality.",
|
| 302 |
+
"bbox": [
|
| 303 |
+
507,
|
| 304 |
+
419,
|
| 305 |
+
882,
|
| 306 |
+
564
|
| 307 |
+
],
|
| 308 |
+
"page_idx": 1
|
| 309 |
+
},
|
| 310 |
+
{
|
| 311 |
+
"type": "text",
|
| 312 |
+
"text": "Reranking with high-quality evaluation metrics has been shown to be highly effective at improving translation output (Freitag et al., 2022), though it can skew results when the same metric is also used for evaluation (Kocmi et al., 2024a). Reranking performance improves as the number of candidates increases (Vernikos and Popescu-Belis, 2024) and when multiple scoring metrics are combined to form a stronger prediction (Fernandes et al., 2022).",
|
| 313 |
+
"bbox": [
|
| 314 |
+
507,
|
| 315 |
+
565,
|
| 316 |
+
882,
|
| 317 |
+
709
|
| 318 |
+
],
|
| 319 |
+
"page_idx": 1
|
| 320 |
+
},
|
| 321 |
+
{
|
| 322 |
+
"type": "text",
|
| 323 |
+
"text": "Reranking adds significant computational costs to prediction and may be prohibitive to use at test time, but it can be used to benefit LM training instead of test time prediction; high-quality predictions obtained from reranking can be used for knowledge distillation (Wang et al., 2024) and self-training (Finkelstein et al., 2024). Such methods can improve the performance of an MT system without additional costs during test time.",
|
| 324 |
+
"bbox": [
|
| 325 |
+
507,
|
| 326 |
+
709,
|
| 327 |
+
882,
|
| 328 |
+
853
|
| 329 |
+
],
|
| 330 |
+
"page_idx": 1
|
| 331 |
+
},
|
| 332 |
+
{
|
| 333 |
+
"type": "text",
|
| 334 |
+
"text": "Previous work on efficient reranking for MT is relatively limited. Fernandes et al. (2022) and Eikema and Aziz (2022) perform a two-stage reranking by first pruning with a faster and noisier",
|
| 335 |
+
"bbox": [
|
| 336 |
+
507,
|
| 337 |
+
854,
|
| 338 |
+
882,
|
| 339 |
+
917
|
| 340 |
+
],
|
| 341 |
+
"page_idx": 1
|
| 342 |
+
},
|
| 343 |
+
{
|
| 344 |
+
"type": "page_number",
|
| 345 |
+
"text": "2850",
|
| 346 |
+
"bbox": [
|
| 347 |
+
480,
|
| 348 |
+
927,
|
| 349 |
+
521,
|
| 350 |
+
940
|
| 351 |
+
],
|
| 352 |
+
"page_idx": 1
|
| 353 |
+
},
|
| 354 |
+
{
|
| 355 |
+
"type": "text",
|
| 356 |
+
"text": "scoring function to a fixed size before evaluating the target score. There has been recent interest in efficient approximations for MBR (Cheng and Vlachos, 2023; Deguchi et al., 2024; Trabelsi et al., 2024; Vamvas and Sennrich, 2024), but these methods are not applicable to general scoring functions. (Singhal et al., 2023) propose to represent the candidate space compactly in a lattice over which a token-level reranker can efficiently score many candidates. In this work, we attempt to address a more general setting: the reranking of candidate lists with arbitrary black-box scoring functions.",
|
| 357 |
+
"bbox": [
|
| 358 |
+
112,
|
| 359 |
+
84,
|
| 360 |
+
489,
|
| 361 |
+
277
|
| 362 |
+
],
|
| 363 |
+
"page_idx": 2
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"type": "text",
|
| 367 |
+
"text": "2.2 Bayesian optimization with Gaussian process prior",
|
| 368 |
+
"text_level": 1,
|
| 369 |
+
"bbox": [
|
| 370 |
+
112,
|
| 371 |
+
294,
|
| 372 |
+
374,
|
| 373 |
+
326
|
| 374 |
+
],
|
| 375 |
+
"page_idx": 2
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"type": "text",
|
| 379 |
+
"text": "Bayesian optimization is a sequential algorithm for optimizing a black-box function $f$ . $f$ is assumed to be drawn from a prior distribution over functions. The main loop of BayesOpt is as follows: given a set of (possibly noisy) observations of $f(a_{1}),\\dots,f(a_{i})$ , the prior distribution over $f$ is updated to a posterior distribution with Bayes theorem. An acquisition function determines a query point $a_{i + 1}$ at which to evaluate $f$ next. $f(a_{i + 1})$ is evaluated and added to the set of observations. This repeats until a stopping criteria is reached. The principal design choices in BayesOpt are the prior distribution of $f$ and the acquisition function.",
|
| 380 |
+
"bbox": [
|
| 381 |
+
112,
|
| 382 |
+
334,
|
| 383 |
+
489,
|
| 384 |
+
543
|
| 385 |
+
],
|
| 386 |
+
"page_idx": 2
|
| 387 |
+
},
|
| 388 |
+
{
|
| 389 |
+
"type": "text",
|
| 390 |
+
"text": "A common choice of prior is the Gaussian process, which assumes that any subset of points $f(a_{1}), \\ldots, f(a_{i})$ are drawn jointly from a multivariate Gaussian distribution $\\mathcal{N}(\\mu, \\mathcal{K})$ , where $\\mathcal{K}$ is the covariance matrix defined by a kernel function such as the radial basis function kernel (RBF). RBFs define the covariance of two points $a$ and $a'$ as:",
|
| 391 |
+
"bbox": [
|
| 392 |
+
112,
|
| 393 |
+
545,
|
| 394 |
+
489,
|
| 395 |
+
657
|
| 396 |
+
],
|
| 397 |
+
"page_idx": 2
|
| 398 |
+
},
|
| 399 |
+
{
|
| 400 |
+
"type": "equation",
|
| 401 |
+
"text": "\n$$\n\\mathcal {K} _ {\\mathrm {R B F}} (a, a ^ {\\prime}) = \\exp \\left(- \\frac {| | a - a ^ {\\prime} | | ^ {2}}{2 w ^ {2}}\\right), \\quad (1)\n$$\n",
|
| 402 |
+
"text_format": "latex",
|
| 403 |
+
"bbox": [
|
| 404 |
+
159,
|
| 405 |
+
673,
|
| 406 |
+
487,
|
| 407 |
+
709
|
| 408 |
+
],
|
| 409 |
+
"page_idx": 2
|
| 410 |
+
},
|
| 411 |
+
{
|
| 412 |
+
"type": "text",
|
| 413 |
+
"text": "where $w$ is the bandwidth hyperparameter which determines scaling. The choice of kernel dictates prior assumptions about the shape of $f$ ; with RBF, points that are closer in Euclidean space have larger covariance. RBFs are a popular choice of kernel due their ability to adapt to complex nonlinear functions.",
|
| 414 |
+
"bbox": [
|
| 415 |
+
112,
|
| 416 |
+
724,
|
| 417 |
+
489,
|
| 418 |
+
834
|
| 419 |
+
],
|
| 420 |
+
"page_idx": 2
|
| 421 |
+
},
|
| 422 |
+
{
"type": "text",
"text": "The assumption that $f(a_{1}), \\ldots, f(a_{i})$ are jointly Gaussian gives rise to a convenient posterior distribution. Given a vector of observed data points $\\mathbf{a}$ and their observed values $f(\\mathbf{a})$, the posterior mean $\\mu_{a}$ and variance $\\sigma_{a}$ of a point $a$ are given by the",
"bbox": [112, 838, 489, 920],
"page_idx": 2
},
{
"type": "text",
"text": "conditional multivariate Gaussian distribution:",
"bbox": [507, 84, 855, 99],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\mu_{a} = \\mu + \\mathcal{K}(a, \\mathbf{a})\\left(\\mathcal{K}(\\mathbf{a}, \\mathbf{a}) + \\sigma^{2} I\\right)^{-1} f(\\mathbf{a}) \\tag{2}\n$$\n",
"text_format": "latex",
"bbox": [526, 112, 882, 131],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\sigma_{a} = \\mathcal{K}(a, a) + \\sigma^{2} - \\mathcal{K}(a, \\mathbf{a})\\left(\\mathcal{K}(\\mathbf{a}, \\mathbf{a}) + \\sigma^{2} I\\right)^{-1}\\mathcal{K}(\\mathbf{a}, a) \\tag{3}\n$$\n",
"text_format": "latex",
"bbox": [527, 135, 882, 173],
"page_idx": 2
},
{
"type": "text",
"text": "where $\\mu$ is the unconditional mean of the distribution, $\\sigma^2$ is a constant Gaussian noise on observations, $I$ is the identity matrix, and $\\mathcal{K}$ here returns elementwise kernel values when given vector arguments.",
"bbox": [507, 186, 884, 265],
"page_idx": 2
},
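{
"type": "code",
"sub_type": "editor_example",
"code_caption": [
"Editor's illustration, not part of the original paper: a minimal NumPy sketch of the posterior in Equations 2 and 3, assuming a zero unconditional mean after score normalization and a user-supplied kernel(A, B) function; all names are hypothetical."
],
"code_body": "import numpy as np\n\ndef gp_posterior(kernel, a_obs, f_obs, a_query, noise=0.0, mu=0.0):\n    # K(a, a) + sigma^2 I over the observed points, inverted once per update\n    K_oo = kernel(a_obs, a_obs) + noise * np.eye(len(a_obs))\n    K_qo = kernel(a_query, a_obs)\n    inv = np.linalg.inv(K_oo)\n    mean = mu + K_qo @ inv @ f_obs  # Equation 2 (the paper normalizes scores, so mu = 0)\n    var = (np.diag(kernel(a_query, a_query)) + noise\n           - np.einsum('ij,jk,ik->i', K_qo, inv, K_qo))  # Equation 3\n    return mean, var",
"page_idx": 2
},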
{
"type": "text",
"text": "The acquisition function in BayesOpt is the strategy for selecting the next point to evaluate in the optimization process. Acquisition functions can seek the highest expected improvement (EI; Mockus, 1974), an upper confidence bound if the scores are noisy (Srinivas et al., 2009), or information gain (Hennig and Schuler, 2011). We use EI, defined as:",
"bbox": [507, 267, 884, 379],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\alpha(a) = \\mathbb{E}[\\max(f(a) - f(a^{+}), 0)] \\tag{4}\n$$\n",
"text_format": "latex",
"bbox": [563, 393, 882, 411],
"page_idx": 2
},
{
"type": "text",
"text": "where $a^+$ is the location of the current best observation. When $f$ is Gaussian and there is no observation noise, this has the following closed-form solution (Jones, 2001):",
"bbox": [507, 424, 884, 488],
"page_idx": 2
},
{
"type": "equation",
"text": "\n$$\n\\alpha(a) = \\sigma_{a}\\left(z \\cdot \\mathrm{cdf}(z) + \\mathrm{pdf}(z)\\right) \\tag{5}\n$$\n",
"text_format": "latex",
"bbox": [569, 502, 882, 520],
"page_idx": 2
},
{
"type": "text",
"text": "where $z = \\frac{f(a^{+}) - \\mu_{a}}{\\sigma_{a}}$, and cdf and pdf are the Gaussian cumulative distribution function and probability density function, respectively. EI encourages both exploration of uncertain points and exploitation of high-scoring points; the quantity in Equation 5 can be increased by increasing $\\mu_{a}$ or $\\sigma_{a}$.",
"bbox": [507, 533, 884, 632],
"page_idx": 2
},
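{
"type": "code",
"sub_type": "editor_example",
"code_caption": [
"Editor's illustration, not part of the original paper: the closed-form EI of Equation 5 in NumPy/SciPy, with z defined as in the text; sigma is taken to be the posterior standard deviation (the square root of the variance from Equation 3), and all names are hypothetical."
],
"code_body": "import numpy as np\nfrom scipy.stats import norm\n\ndef expected_improvement(mu, sigma, f_best):\n    # z = (f(a+) - mu_a) / sigma_a, guarding against zero variance\n    sigma = np.maximum(sigma, 1e-12)\n    z = (f_best - mu) / sigma\n    return sigma * (z * norm.cdf(z) + norm.pdf(z))  # Equation 5",
"page_idx": 2
},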
{
"type": "text",
"text": "The generality of BayesOpt and the modeling freedom enjoyed by GPs make them suitable for a great variety of tasks, including spatial monitoring (Krause et al., 2008) and hyperparameter optimization (Bergstra et al., 2011). GPs have been applied to text regression tasks (Beck et al., 2013, 2014; Beck and Cohn, 2017), but they are not as well-studied in NLP as in many other domains.",
"bbox": [507, 633, 882, 760],
"page_idx": 2
},
{
"type": "text",
"text": "3 Methods",
"text_level": 1,
"bbox": [507, 774, 621, 789],
"page_idx": 2
},
{
"type": "text",
"text": "3.1 MT reranking with Bayesian optimization",
"text_level": 1,
"bbox": [507, 800, 880, 816],
"page_idx": 2
},
{
"type": "text",
"text": "Our main algorithm is an adaptation of BayesOpt with GPs as described in Section 2.2 to the reranking setting. Each source sentence $x$ and its associated candidate list is treated as a standalone BayesOpt problem, meaning that no observations are shared across different $x$. Thus for brevity, we",
"bbox": [507, 822, 882, 917],
"page_idx": 2
},
{
"type": "page_number",
"text": "2851",
"bbox": [480, 927, 517, 940],
"page_idx": 2
},
{
"type": "text",
"text": "Inputs: main metric $s$, proxy metric $s'$, hypotheses $\\mathcal{C}$, number of initial main scores $\\alpha$, number of initial proxy scores $\\beta$, scoring budget $n$ for evaluating $s$, batch size $k$, precomputed multi-fidelity kernel $\\mathcal{K}_{\\mathrm{mult}}$.",
"bbox": [112, 85, 880, 133],
"page_idx": 3
},
{
"type": "text",
"text": "Output: hypothesis with the highest observed score, $\\arg \\max_{y\\in \\mathcal{C}_{\\mathrm{obs}}} s(y)$",
"bbox": [112, 134, 653, 151],
"page_idx": 3
},
{
"type": "code",
"sub_type": "algorithm",
"code_caption": [],
"code_body": "1: $\\mathcal{C}_{\\mathrm{obs}}^{\\prime}\\gets \\binom{\\mathcal{C}}{\\min(\\beta, |\\mathcal{C}|)}, \\mathcal{C}_{\\mathrm{obs}}\\gets \\binom{\\mathcal{C}_{\\mathrm{obs}}^{\\prime}}{\\min(\\alpha, |\\mathcal{C}|)}$ ▷ Sample initial subsets \n2: $S_{\\mathrm{obs}}\\gets \\{s(y) \\mid y\\in \\mathcal{C}_{\\mathrm{obs}}\\}$ ▷ Compute scores for main scoring function \n3: $S_{\\mathrm{obs}}^{\\prime}\\gets \\{s^{\\prime}(y) \\mid y\\in \\mathcal{C}_{\\mathrm{obs}}^{\\prime}\\}$ ▷ Compute proxy scores \n4: while $|\\mathcal{C}_{\\mathrm{obs}}| < n$ and $|\\mathcal{C}_{\\mathrm{obs}}| < |\\mathcal{C}|$ do \n5: $\\bar{\\mathcal{C}}_{\\mathrm{obs}}\\gets \\mathcal{C}\\setminus \\mathcal{C}_{\\mathrm{obs}}$ ▷ Get complement of $\\mathcal{C}_{\\mathrm{obs}}$ \n6: $\\hat{S}\\gets \\operatorname{Norm}(S_{\\mathrm{obs}}), \\hat{S}^{\\prime}\\gets \\operatorname{Norm}(S_{\\mathrm{obs}}^{\\prime})$ ▷ Normalize observed scores to 0 mean, 1 variance \n7: $y_{\\mathrm{best}}\\gets \\arg\\max_{y\\in \\mathcal{C}_{\\mathrm{obs}}}\\hat{S}(y)$ ▷ Get best observed point \n8: $\\forall y\\in \\bar{\\mathcal{C}}_{\\mathrm{obs}}: \\mu_y, \\sigma_y\\gets$ calculate posterior using $y, \\mathcal{K}_{\\mathrm{mult}}, \\hat{S}, \\hat{S}^{\\prime}$ ▷ GP posterior as in Equations (2) and (3) \n9: $\\forall y\\in \\bar{\\mathcal{C}}_{\\mathrm{obs}}: \\gamma_y\\gets \\operatorname{EI}(y_{\\mathrm{best}}, \\mu_y, \\sigma_y)$ ▷ Expected improvement as in Equation (5) \n10: $\\mathcal{C}_{\\text{top-}k}\\gets \\arg\\operatorname{topk}_{y\\in \\bar{\\mathcal{C}}_{\\mathrm{obs}}}\\gamma_y$ ▷ Select $k$ best hypotheses based on EI \n11: $S_{\\mathrm{obs}}\\gets S_{\\mathrm{obs}}\\cup \\{s(y) \\mid y\\in \\mathcal{C}_{\\text{top-}k}\\}$ ▷ Compute scores for selected hypotheses \n12: $\\mathcal{C}_{\\mathrm{obs}}\\gets \\mathcal{C}_{\\mathrm{obs}}\\cup \\mathcal{C}_{\\text{top-}k}$ ▷ Update observed hypotheses \n13: end while \n14: return $\\arg\\max_{y\\in \\mathcal{C}_{\\mathrm{obs}}} s(y)$",
"bbox": [124, 154, 882, 411],
"page_idx": 3
},
{
"type": "text",
"text": "Algorithm 1: The BayesOpt+GP+P algorithm. BayesOpt+GP is a special case of this where $\\beta = 0$.",
"bbox": [161, 412, 831, 426],
"page_idx": 3
},
{
"type": "text",
"text": "omit $x$ from notation when discussing BayesOpt for a particular instance.",
"bbox": [112, 464, 485, 495],
"page_idx": 3
},
{
"type": "text",
"text": "Let $s$ be the scoring function, an MT quality estimator. Let $\\mathcal{C}$ be a set of candidates, $\\mathcal{C}_{\\mathrm{obs}} \\subseteq \\mathcal{C}$ the subset of candidates for which we have observed $s(y)$, and $\\bar{\\mathcal{C}}_{\\mathrm{obs}} = \\mathcal{C} \\setminus \\mathcal{C}_{\\mathrm{obs}}$ all other $y$. To perform reranking for an instance, we first generate candidates $\\mathcal{C}$ and initialize the algorithm by scoring a random $\\alpha$-sized subset of the list with $s$. In each iteration of the algorithm loop, we normalize the observed scores to mean 0 and variance 1 and assume an unconditional mean of 0. We then compute the GP posterior of all $y \\in \\bar{\\mathcal{C}}_{\\mathrm{obs}}$ with Equations 2 and 3 given the scores of $\\mathcal{C}_{\\mathrm{obs}}$, which is then used to compute EI with Equation 5, assuming no observation noise. We score the $k$ candidates in $\\bar{\\mathcal{C}}_{\\mathrm{obs}}$ with the highest EI, adding them to $\\mathcal{C}_{\\mathrm{obs}}$ (and removing them from $\\bar{\\mathcal{C}}_{\\mathrm{obs}}$), and repeat the loop, terminating when a predefined budget of $n$ calls to $s$ is reached (or when all candidates have been evaluated, in the case that $|\\mathcal{C}| \\leq n$). Finally, we choose $\\arg \\max_{y \\in \\mathcal{C}_{\\mathrm{obs}}} s(y)$ as the prediction.",
"bbox": [112, 498, 485, 820],
"page_idx": 3
},
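{
"type": "code",
"sub_type": "editor_example",
"code_caption": [
"Editor's illustration, not part of the original paper: the single-fidelity reranking loop above as a Python sketch, reusing the gp_posterior and expected_improvement sketches from Section 2.2; cands, embs, score, and kernel are assumed inputs, and all names are hypothetical."
],
"code_body": "import numpy as np\n\ndef bayesopt_rerank(cands, embs, score, kernel, n=100, alpha=10, k=1):\n    idx = np.random.choice(len(cands), size=min(alpha, len(cands)), replace=False)\n    scores = {int(i): score(cands[i]) for i in idx}  # initial random observations\n    while len(scores) < min(n, len(cands)):\n        rest = [i for i in range(len(cands)) if i not in scores]\n        obs = list(scores)\n        vals = np.array([scores[i] for i in obs])\n        vals = (vals - vals.mean()) / (vals.std() + 1e-12)  # 0 mean, 1 variance\n        mu, var = gp_posterior(kernel, embs[obs], vals, embs[rest])\n        ei = expected_improvement(mu, np.sqrt(np.maximum(var, 0.0)), vals.max())\n        for i in np.asarray(rest)[np.argsort(-ei)[:k]]:  # score the top-k EI candidates\n            scores[int(i)] = score(cands[int(i)])\n    return cands[max(scores, key=scores.get)]",
"page_idx": 3
},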
{
"type": "text",
"text": "We now describe our choice of GP kernel. $y\\in \\mathcal{C}$ are strings, and we seek a representation that is fast to compute and to compare, since $|\\mathcal{C}|$ representations are generated and computing the GP covariance matrix requires $|\\mathcal{C}|^2$ comparisons. Our kernel is $\\mathcal{K}_{\\mathrm{MT}}(y_i, y_j) = \\mathcal{K}_{\\mathrm{RBF}}(\\mathrm{emb}(y_i), \\mathrm{emb}(y_j))$",
"bbox": [112, 822, 487, 920],
"page_idx": 3
},
{
"type": "text",
"text": "where emb returns the mean-pooled token-level outputs of the final decoder layer when generating $y$, normalized to the unit norm after pooling. emb uses representations produced automatically during candidate list generation, meaning the additional cost to compute it is negligible. Also, the covariance matrix is fast to compute given the candidate list sizes and embedding dimensionality used in our experiments.",
"bbox": [507, 464, 882, 609],
"page_idx": 3
},
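{
"type": "code",
"sub_type": "editor_example",
"code_caption": [
"Editor's illustration, not part of the original paper: the sentence kernel K_MT as a NumPy sketch, where decoder_states stands in for the final-decoder-layer outputs collected during generation; the bandwidth w and all names are hypothetical."
],
"code_body": "import numpy as np\n\ndef emb(decoder_states):\n    # mean-pool the token-level decoder states, then normalize to unit norm\n    v = decoder_states.mean(axis=0)\n    return v / np.linalg.norm(v)\n\ndef k_mt(E1, E2, w=1.0):\n    # K_MT(y_i, y_j) = K_RBF(emb(y_i), emb(y_j)); E1, E2 are stacked embeddings\n    sq = ((E1[:, None, :] - E2[None, :, :]) ** 2).sum(-1)\n    return np.exp(-sq / (2.0 * w ** 2))  # Equation 1 applied elementwise",
"page_idx": 3
},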
{
"type": "text",
"text": "3.2 Multi-fidelity BayesOpt",
"text_level": 1,
"bbox": [507, 621, 739, 639],
"page_idx": 3
},
{
"type": "text",
"text": "We also propose an extension to BayesOpt+GP for the setting where observations are available from a different but related proxy score function $s'$. We refer to this as BayesOpt+GP+P. $s'$ is assumed to have non-trivial covariance with the scoring model $s$ and to be cheaper to evaluate. This is known as multi-fidelity BayesOpt in the literature, but while the multi-fidelity settings of Kandasamy et al. (2016) and Wu et al. (2020) use acquisition functions that may choose to evaluate lower-fidelity scores, we study a simpler setting: $\\beta$ observations of $s'$ are obtained at the start, where $\\beta > \\alpha$, and only $s$ may be evaluated during the BayesOpt loop. In the multi-fidelity setting, observations are made on $\\langle y_i, s_i \\rangle$, a combination of a data point and a scoring function, instead of the data point alone.",
"bbox": [505, 644, 882, 901],
"page_idx": 3
},
{
"type": "text",
"text": "Our kernel for BayesOpt+GP+P is the product",
"bbox": [526, 903, 880, 919],
"page_idx": 3
},
{
"type": "page_number",
"text": "2852",
"bbox": [480, 927, 519, 940],
"page_idx": 3
},
{
"type": "text",
"text": "of the RBF kernel from Section 3.1 and a kernel over scoring functions:",
"bbox": [112, 84, 487, 116],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{K}_{\\mathrm{mult}}(\\langle y_{i}, s_{k} \\rangle, \\langle y_{j}, s_{l} \\rangle) = \\mathcal{K}_{\\mathrm{MT}}(y_{i}, y_{j})\\, \\mathcal{K}_{\\mathrm{score}}(s_{k}, s_{l}) \\tag{6}\n$$\n",
"text_format": "latex",
"bbox": [124, 134, 487, 172],
"page_idx": 4
},
{
"type": "text",
"text": "$\\mathcal{K}_{\\mathrm{mult}}$ is a valid kernel because a product of two kernels defined on different spaces is also a kernel (Rasmussen and Williams, 2005). With $\\mathcal{K}_{\\mathrm{mult}}$, the covariance between two observations depends on both the difference between scoring functions and the distance between data points. This way, an observation influences the posterior for all other data points at all choices of scoring function, as long as the scoring functions are correlated. This formulation enables the use of any number of scoring functions, but in this work, we consider at most two: the main scorer $s$ and a proxy scorer $s'$.",
"bbox": [112, 187, 487, 381],
"page_idx": 4
},
{
"type": "text",
"text": "We set $\\mathcal{K}_{\\mathrm{score}}(s_k, s_l)$ to be the empirical covariance between $s_k$ and $s_l$ measured over a validation set, where all scores are normalized per-instance so that, in each instance, the scores of all candidates for a particular scorer have 0 mean and 1 variance. Then, for each scoring function, we concatenate all candidate scores across instances and compare the resulting lists to obtain the covariance. Covariance is a valid kernel because the covariance calculation can be expressed as a dot product, and dot products are valid kernels.",
"bbox": [112, 382, 487, 557],
"page_idx": 4
},
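{
"type": "code",
"sub_type": "editor_example",
"code_caption": [
"Editor's illustration, not part of the original paper: estimating K_score(s_k, s_l) as the empirical covariance of per-instance-normalized scores on a validation set; the per-instance score arrays are assumed inputs, and all names are hypothetical."
],
"code_body": "import numpy as np\n\ndef k_score(scores_k, scores_l):\n    # scores_k, scores_l: lists of arrays, one array of candidate scores per instance\n    def flatten(per_instance):\n        # normalize each instance's scores to 0 mean, 1 variance, then concatenate\n        return np.concatenate([(s - s.mean()) / (s.std() + 1e-12) for s in per_instance])\n    return float(np.cov(flatten(scores_k), flatten(scores_l))[0, 1])",
"page_idx": 4
},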
{
"type": "text",
"text": "Proxy scores are incorporated into the posterior estimation given by Equations 2 and 3 by redefining $a$ to be a tuple of (data point, scoring function) and $\\mathbf{a}$ to be a vector of such tuples. The kernel $\\mathcal{K}$ is set to $\\mathcal{K}_{\\mathrm{mult}}$, which takes as input two such tuples. The full BayesOpt+GP+P algorithm is given in Algorithm 1.",
"bbox": [112, 561, 489, 671],
"page_idx": 4
},
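{
"type": "code",
"sub_type": "editor_example",
"code_caption": [
"Editor's illustration, not part of the original paper: the product kernel of Equation 6 over (data point, scoring function) tuples, assuming k_mt from Section 3.1 and a 2x2 matrix of K_score values with index 0 for the main scorer s and 1 for the proxy s'; all names are hypothetical."
],
"code_body": "def k_mult(pair_i, pair_j, k_mt, k_score_matrix):\n    # pair = (embedding, scorer_index); Equation 6 as a product of the two kernels\n    (e_i, s_k), (e_j, s_l) = pair_i, pair_j\n    return k_mt(e_i[None, :], e_j[None, :])[0, 0] * k_score_matrix[s_k][s_l]",
"page_idx": 4
},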
{
"type": "text",
"text": "3.3 Proxy scores",
"text_level": 1,
"bbox": [112, 687, 260, 702],
"page_idx": 4
},
{
"type": "text",
"text": "We train smaller scoring models to have high covariance with $s$ for use in BayesOpt+GP+P. In this work, our scoring functions are based on the Comet referenceless quality estimation architecture (Rei et al., 2020), also known as CometKiwi. These models encode the source and hypothesis jointly with a bidirectional transformer. Activations from all transformer layers are pooled to form a fixed-size representation, which is passed to a feed-forward regression head. The vast majority of computation in these models is spent in the encoder, so faster Comet models can be obtained by reducing the size of the encoder.",
"bbox": [112, 709, 489, 917],
"page_idx": 4
},
{
"type": "text",
"text": "We train Comet models using two differently sized pretrained multilingual encoder models in two ways: (1) training on the same training set as CometKiwi and (2) distillation. Among distillation methods, we attempt in preliminary experiments (a) training on the same training set as CometKiwi with ground truth scores replaced with CometKiwi scores and (b) training on a synthetic dataset comprising LM samples along with their associated CometKiwi scores. The latter achieves higher correlation with CometKiwi on sampled candidates, which is to be expected since the training distribution is more suitable for the reranking use case. We therefore use this latter distillation method for all subsequent experiments. A similar procedure has been described in Rei et al. (2022a).",
"bbox": [507, 84, 884, 341],
"page_idx": 4
},
{
"type": "text",
"text": "3.4 Candidate list generation",
"text_level": 1,
"bbox": [507, 355, 754, 370],
"page_idx": 4
},
{
"type": "text",
"text": "In preliminary experiments, we consider generating the candidate list using beam search with 128 outputs versus sampling 200 candidates using $\\epsilon$-sampling (Hewitt et al., 2022) with $\\epsilon = 0.02$, a setting which effectively balances quality and diversity for MBR (Freitag et al., 2023a). Under beam search, the candidates exhibit high lexical overlap, and while the mean score of candidates is higher, the average maximum score is lower. The effectiveness of truncated sampling over beam search in large conditional language models has also been observed by Fernandes et al. (2022).",
"bbox": [507, 376, 884, 569],
"page_idx": 4
},
{
"type": "text",
"text": "Furthermore, beam search suffers from out-of-memory errors on long translations, whereas with sampling, we simply reduce the batch size when out of memory. While it is possible to implement beam search in a batched manner, to the best of our knowledge, no popular conditional language model library provides such an implementation.",
"bbox": [507, 571, 882, 683],
"page_idx": 4
},
{
"type": "text",
"text": "For these reasons, we generate 200 candidates per instance with $\\epsilon$-sampling, $\\epsilon = 0.02$, in all experiments. The sampled candidate list is then deduplicated, resulting in $\\sim 178$ candidates on average per instance.",
"bbox": [507, 684, 882, 763],
"page_idx": 4
},
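{
"type": "code",
"sub_type": "editor_example",
"code_caption": [
"Editor's illustration, not part of the original paper: candidate generation with epsilon-sampling and deduplication using Hugging Face transformers, assuming a recent version that supports the epsilon_cutoff generation flag; the checkpoint, language codes, and all other names are illustrative."
],
"code_body": "from transformers import AutoModelForSeq2SeqLM, AutoTokenizer\n\nname = \"facebook/nllb-200-distilled-600M\"\ntok = AutoTokenizer.from_pretrained(name, src_lang=\"eng_Latn\")\nmodel = AutoModelForSeq2SeqLM.from_pretrained(name)\n\ninputs = tok(\"An example source sentence.\", return_tensors=\"pt\")\nout = model.generate(\n    **inputs,\n    do_sample=True,\n    epsilon_cutoff=0.02,       # epsilon-sampling (Hewitt et al., 2022)\n    num_return_sequences=200,  # 200 candidates per instance\n    forced_bos_token_id=tok.convert_tokens_to_ids(\"deu_Latn\"),\n)\n# deduplicate while preserving first-appearance order\ncands = list(dict.fromkeys(tok.batch_decode(out, skip_special_tokens=True)))",
"page_idx": 4
},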
{
"type": "text",
"text": "4 Experiments",
"text_level": 1,
"bbox": [507, 778, 655, 795],
"page_idx": 4
},
{
"type": "text",
"text": "We now discuss the details and findings of our Bayesian optimization experiments, followed by analysis of our trained proxy scoring models, concluding with runtime measurements. All runtime values are measured on an A100-SXM4-40GB GPU. For exact values for figures in this section, see Table 3 in the Appendix. Appendix A contains",
"bbox": [507, 806, 884, 917],
"page_idx": 4
},
{
"type": "page_number",
"text": "2853",
"bbox": [480, 927, 519, 940],
"page_idx": 4
},
{
"type": "image",
"img_path": "images/d370ec0f7cab5c01e2cff62b543092aa24ba58d9388ea5a8a05b4944fc5e01c8.jpg",
"image_caption": [
"Figure 2: Left: Performance of reranking methods measured as the average CometKiwi score of the selected candidate. Beam search with beam size 5 achieves a score of 0.754 and is too low to be pictured here. Right: percentage of instances where the selected candidate had the highest score. The x-axis is the scoring budget. Legends show the normalized area under the curve of the CometKiwi score of each method in brackets."
],
"image_footnote": [],
"bbox": [127, 89, 489, 262],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/0dea5bf20d5fb1320008afc676105db4d64c18218cea3e6d428483523935b134.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [497, 90, 868, 263],
"page_idx": 5
},
{
"type": "text",
"text": "extensive statistical significance tests.",
"bbox": [112, 365, 394, 380],
"page_idx": 5
},
{
"type": "text",
"text": "For BayesOpt experiments, we grid search for the optimal value of the RBF bandwidth parameter $w$ on the entire validation set, setting scoring budget $n = 100$ and batch size $k = 1$. While it is possible to optimize it for every unique combination of language pair, $n$, $k$, proxy scoring function, and $\\beta$, we find that the results are not statistically significantly different within a range of settings. For simplicity, and to demonstrate the robustness of our methods, we use the same $w$ for all experiments.",
"bbox": [112, 382, 487, 542],
"page_idx": 5
},
{
"type": "text",
"text": "In all experiments, we use $\\alpha = 10$ initial randomly scored candidates. We set $k = 1$ in Sections 4.2 and 4.3 to demonstrate the effectiveness of BayesOpt+GP under ideal conditions, but since $k$ can have a large impact on speed, we experiment with varying it in Section 4.5.",
"bbox": [112, 544, 489, 640],
"page_idx": 5
},
{
"type": "text",
"text": "4.1 Models and datasets",
"text_level": 1,
"bbox": [112, 653, 319, 668],
"page_idx": 5
},
{
"type": "text",
"text": "For candidate generation, we use the 600M-parameter distilled NLLB model (Team et al., 2022) in all experiments. For the main scoring model, we use CometKiwi-22 (Rei et al., 2022b).",
"bbox": [112, 676, 487, 739],
"page_idx": 5
},
{
"type": "text",
"text": "As the dataset for proxy model training, we use data from the WMT Metrics Shared Task up to 2022 (Freitag et al., 2023b), which contains tuples of <source, hypothesis, human score>. The human scores were largely collected with the DA+SQM annotation protocol (Kocmi et al., 2022).",
"bbox": [112, 741, 487, 837],
"page_idx": 5
},
{
"type": "text",
"text": "For BayesOpt experiments, we select the first 1000 and 500 source sentences per language pair from the WMT23 Metrics Shared Task dataset as the validation and test set, respectively, for 7 language pairs: English-Czech, English-German,",
"bbox": [112, 839, 489, 917],
"page_idx": 5
},
{
"type": "text",
"text": "English-Japanese, English-Chinese, and the reverse directions of the latter 3 pairs.",
"bbox": [507, 365, 880, 395],
"page_idx": 5
},
{
"type": "text",
"text": "CometKiwi is based on the encoder of XLM-Roberta-large (Conneau et al., 2019) (2.2GB memory). For proxy scorers, we train smaller models based on XLM-Roberta-base (1.1GB) and Multilingual-MiniLM-L12-H384 (Wang et al., 2020) (469MB).",
"bbox": [507, 397, 882, 493],
"page_idx": 5
},
{
"type": "text",
"text": "4.2 BayesOpt+GP",
"text_level": 1,
"bbox": [507, 504, 668, 519],
"page_idx": 5
},
{
"type": "text",
"text": "The goal of reranking with BayesOpt+GP is to improve speed by evaluating only a subset of the available candidates. We evaluate this through quality-cost tradeoff curves, where quality is determined by the final selected candidate's CometKiwi score, and cost is determined by the number of calls to the scoring function. As another measure of approximation quality, we also show the percentage of instances in which the actual best-scoring candidate is returned. We devise several baselines with which to compare BayesOpt+GP. Each is a strategy for selecting a subset of candidates to score, from which the best-scoring candidate is returned. The baselines are:",
"bbox": [505, 524, 882, 734],
"page_idx": 5
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- UniqRandom: Shuffle the candidate list before de-duplication, then de-duplicate while preserving the order of the first appearance of each candidate. Select the first $\\min(n, |\\mathcal{C}|)$ candidates in the resulting list.",
"- Logprob{Avg, Sum}: Sort $\\mathcal{C}$ in decreasing order of sequence log probability (either average or sum), then select the first $\\min(n, |\\mathcal{C}|)$.",
"- HillClimbing: Let $y^{+}$ be the highest-scoring observation at any time step. Iteratively select $\\arg \\min_{y \\in \\bar{\\mathcal{C}}_{\\mathrm{obs}}} ||\\mathrm{emb}(y) - \\mathrm{emb}(y^{+})||$ as the next"
],
"bbox": [507, 741, 882, 920],
"page_idx": 5
},
{
"type": "page_number",
"text": "2854",
"bbox": [480, 927, 519, 940],
"page_idx": 5
},
{
"type": "image",
"img_path": "images/4121b2adf427c7307608094b00b2756359eb921e123b129910d5b8d9ffe91892.jpg",
"image_caption": [
"Figure 3: Average CometKiwi score of the selected top candidate (y-axis) for BayesOpt+GP+P with Distilled-S (left) and Distilled-M (right) compared to the ProxyFirst baseline. This figure disregards the additional compute costs for these proxy metrics in order to show the marginal score increase from proxy observations."
],
"image_footnote": [],
"bbox": [127, 80, 487, 286],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/34ad07629b940b5e49f56cb462918223f9eace6b446709e67db05884164d5dec.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [514, 80, 875, 286],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/b2d6322266c841bfbe8f9bbcf90fbd41d848ebd5eb35989d98751ce947fca48d.jpg",
"image_caption": [
"Figure 4: Average CometKiwi score of the selected candidate (y-axis) for BayesOpt+GP+P with different choices of proxy score."
],
"image_footnote": [],
"bbox": [127, 369, 484, 565],
"page_idx": 6
},
{
"type": "text",
"text": "observation point until $\\min(n, |\\mathcal{C}|)$ candidates are scored.",
"bbox": [127, 651, 487, 682],
"page_idx": 6
},
{
"type": "text",
"text": "UniqRandom simulates the effect of iteratively sampling candidates until $n$ unique candidates are obtained. Logprob{Avg, Sum} are included to verify whether more advanced methods indeed outperform simple subset selection using statistics obtained for free. HillClimbing is a heuristic iterative selection strategy which, like BayesOpt, is black-box and derivative-free (Conn et al., 2009).",
"bbox": [112, 694, 487, 820],
"page_idx": 6
},
{
"type": "text",
"text": "In Figure 2, BayesOpt+GP outperforms all baselines, and HillClimbing is the best among the baselines, with LogprobAvg following behind. LogprobSum severely underperforms UniqRandom in score, confirming findings on the inadequacy of very high probability translations (Eikema and",
"bbox": [112, 822, 489, 917],
"page_idx": 6
},
{
"type": "text",
"text": "Aziz, 2020). Informally speaking, UniqRandom is a simple \"exploration\" strategy that ignores existing observations, while HillClimbing is a simple \"exploitation\" strategy, searching only over the neighbors nearest the best observation while ignoring the full search space. These results confirm that balancing these respective desiderata helps to find the optimal candidate more efficiently.",
"bbox": [505, 373, 884, 502],
"page_idx": 6
},
{
"type": "text",
"text": "4.3 BayesOpt+GP+P",
"text_level": 1,
"bbox": [507, 531, 690, 546],
"page_idx": 6
},
{
"type": "text",
"text": "4.3.1 Proxy score evaluation",
"text_level": 1,
"bbox": [507, 563, 746, 577],
"page_idx": 6
},
{
"type": "text",
"text": "We first evaluate trained proxy scorers independently of their use in BayesOpt according to (1) actual runtime, (2) correlation with human ratings in the WMT23 dataset, (3) correlation with CometKiwi on source-hypothesis pairs in WMT23, and (4) correlation with CometKiwi on synthetic candidates for an instance, averaged over instances. For correlations, we use Kendall's $\\tau_{c}$, which is commonly used in MT metric evaluation (Freitag et al., 2023b).",
"bbox": [505, 592, 884, 751],
"page_idx": 6
},
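{
"type": "code",
"sub_type": "editor_example",
"code_caption": [
"Editor's illustration, not part of the original paper: computing Kendall's tau-c between a proxy scorer and CometKiwi with SciPy; the two aligned score arrays are assumed inputs."
],
"code_body": "from scipy.stats import kendalltau\n\n# proxy_scores and cometkiwi_scores: aligned 1-D arrays over the same hypotheses\ntau_c, p_value = kendalltau(proxy_scores, cometkiwi_scores, variant=\"c\")",
"page_idx": 6
},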
{
"type": "text",
"text": "Table 1 shows the results for the proxy models. Model size corresponds closely to inference time. As desired, training proxies using distillation results in much higher correlation with CometKiwi, although they lose some correlation with human judgments. In subsequent experiments, we consider Distilled-{S, M} only. While LogprobAvg has comparatively much lower correlation, we nevertheless consider it as a proxy score since it is obtained for free during candidate generation.",
"bbox": [507, 758, 885, 919],
"page_idx": 6
},
{
"type": "page_number",
"text": "2855",
"bbox": [480, 927, 519, 940],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/cc84bf297eb0ca454719d14c635a6c4633222b281cb224bc60769e8d2bf371f7.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Model</td><td>Time</td><td>Human Test</td><td>CometKiwi Test</td><td>Cands.</td></tr><tr><td>CometKiwi</td><td>51.38s</td><td>0.245</td><td>1.000</td><td>1.000</td></tr><tr><td>LogprobsAvg</td><td>0.00s</td><td>-</td><td>-</td><td>0.191</td></tr><tr><td>LogprobsSum</td><td>0.00s</td><td>-</td><td>-</td><td>-0.090</td></tr><tr><td>Authentic-S</td><td>7.13s</td><td>0.193</td><td>0.314</td><td>0.350</td></tr><tr><td>Authentic-M</td><td>18.71s</td><td>0.199</td><td>0.320</td><td>0.448</td></tr><tr><td>Distilled-S</td><td>7.13s</td><td>0.169</td><td>0.488</td><td>0.620</td></tr><tr><td>Distilled-M</td><td>18.71s</td><td>0.188</td><td>0.572</td><td>0.680</td></tr></table>",
"bbox": [117, 80, 482, 241],
"page_idx": 7
},
{
"type": "text",
"text": "Table 1: Benchmarking proxy models (Section 3.3) on speed and correlation with human judgments/CometKiwi using the WMT23 dataset. Speed is measured by runtime per 10000 samples using the maximum batch size. Correlation is measured with Kendall's $\\tau_{c}$ against human judgments and CometKiwi scores. CometKiwi correlation is taken over the provided targets in WMT23 (Test) and a synthetic dataset comprised of 200 samples per source sentence, deduplicated (Cands). Logprobs{Avg, Sum} are not evaluated on WMT23 targets because those targets are generated by other MT systems.",
"bbox": [110, 250, 489, 423],
"page_idx": 7
},
{
"type": "text",
"text": "4.3.2 Reranking results",
"text_level": 1,
"bbox": [112, 449, 315, 463],
"page_idx": 7
},
{
"type": "text",
"text": "When $s'$ is sufficiently fast and correlated with $s$, it can further improve the quality-cost tradeoff of BayesOpt+GP. Recall that BayesOpt+GP+P initializes with $\\beta$ evaluations of $s'$. Figure 4 shows the quality-cost curve when all proxy scores are known, i.e., $\\beta = 200$. The relative performance when including proxy scores corresponds to their correlation with CometKiwi as shown in Table 1; Distilled-M outperforms Distilled-S, and both outperform LogprobAvg. This demonstrates the importance of ensuring high correlation in the proxy score. The addition of LogprobAvg to BayesOpt+GP has little effect, showing that poorly correlated proxies are too noisy to help and may even hinder performance. Beyond $n = 70$, all methods achieve close to the maximum attainable score.",
"bbox": [110, 467, 489, 724],
"page_idx": 7
},
{
"type": "text",
"text": "We also examine the effect of initializing with a fraction of proxy observations rather than all of them. For some choice of $\\beta$, an appropriate baseline is to rank the top-$n$ candidates among the $\\beta$ observed proxy scores. We call this ProxyFirst. The results when using Distilled-M and Distilled-S as proxies are shown in Figure 3. In both cases, the difference between BayesOpt+GP+P and ProxyFirst is smaller when $\\beta = 200$ than when $\\beta = 50$, and this gap is smaller for Distilled-M. This is to be expected because as the covariance of $s$ and $s'$ increases, using ProxyFirst with $\\beta = 200$ approaches",
"bbox": [110, 726, 489, 917],
"page_idx": 7
},
{
"type": "text",
"text": "standard full-list reranking. The marginal benefit of BayesOpt+GP+P is clearer when $\\beta = 50$, where proxy scores help to find promising candidates earlier.",
"bbox": [507, 84, 884, 148],
"page_idx": 7
},
{
"type": "text",
"text": "Overall, proxy observations can indeed improve quality for a particular $n$. However, for sufficiently large $n$, BayesOpt+GP converges, so proxy observations are unnecessary. Proxy evaluations add to the runtime cost, which we discuss in Section 4.4. Therefore, while we show that the multi-fidelity kernel is capable of leveraging proxy scores to improve search, in practice, the overall computational budget should be considered along with the quality and cost of the proxy scoring function to ensure that using the method is worthwhile.",
"bbox": [507, 149, 885, 326],
"page_idx": 7
},
{
"type": "text",
"text": "4.4 Runtime",
"text_level": 1,
"bbox": [507, 344, 626, 357],
"page_idx": 7
},
{
"type": "text",
"text": "Our reranking algorithm significantly reduces actual runtime compared to scoring all candidates for a source sentence. We profile the full pipeline, from generating candidates to making a final selection, in three settings: (1) BayesOpt+GP with $n = 90$, (2) multi-fidelity BayesOpt+GP with 50 Distilled-S scores and $n = 70$, and (3) the baseline of evaluating CometKiwi on all candidates. $n$ and $\\beta$ are selected to balance the final scores of the two algorithms (0.8213 and 0.8211, respectively, as shown in Table 3).",
"bbox": [507, 368, 884, 545],
"page_idx": 7
},
{
"type": "text",
"text": "For the runtime calculations, we select 50 source sentences from each language pair and generate 200 candidates for each. For the baseline, we compute scores for all candidates with a batch size of 200. For the BayesOpt+GP methods, we profile the additional steps required: computing the kernel, computing the posteriors at each step, and evaluating proxy scores. BayesOpt+GP(+P) uses batch size $k = 10$, which does not affect scores compared to using $k = 1$ (see Section 4.5). Memory bandwidth can be a major overhead in large neural networks, making it inefficient to run small batches. Since BayesOpt+GP obtains $k$ candidates per step, in order to use large batches, we process candidates for multiple instances in parallel.",
"bbox": [507, 546, 885, 788],
"page_idx": 7
},
{
"type": "text",
"text": "Results are shown in Table 2. In all cases, candidate generation and CometKiwi calculations dominate the overall runtime. The extra cost from BayesOpt-related computations is compensated by the savings from reducing CometKiwi evaluations, despite similarity matrix computation being $\\mathcal{O}(|\\mathcal{C}|^2)$ and the matrix inversion for the posterior calculation at each iteration being $\\mathcal{O}(|\\mathcal{C}|^3)$.",
"bbox": [507, 790, 885, 920],
"page_idx": 7
},
{
"type": "page_number",
"text": "2856",
"bbox": [480, 928, 521, 940],
"page_idx": 7
},
{
"type": "text",
"text": "BayesOpt+GP+P with Distilled-S reduces the runtime by further reducing the number of CometKiwi calculations to 70, with the cost of loading and running the Distilled-S proxy metric introducing minimal overhead.",
"bbox": [112, 84, 487, 164],
"page_idx": 8
},
{
"type": "table",
"img_path": "images/e7e7c0dd7024870ee62ed181b863c94c4857553755de449a2df5b65cb15e2564.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Operation</td><td>AllComet</td><td>BayesOpt+GP (n = 90)</td><td>BayesOpt+GP+P (n = 70, β = 50)</td></tr><tr><td>Candidates</td><td>701.38</td><td>701.38</td><td>701.38</td></tr><tr><td>Similarities</td><td>-</td><td>1.24</td><td>1.24</td></tr><tr><td>BayesOpt+GP</td><td>-</td><td>1.92</td><td>2.25</td></tr><tr><td>Comet Loading</td><td>8.43</td><td>8.43</td><td>11.27</td></tr><tr><td>Distilled-S</td><td>-</td><td>-</td><td>11.11</td></tr><tr><td>CometKiwi</td><td>274.87</td><td>188.39</td><td>146.33</td></tr><tr><td>Total</td><td>984.68</td><td>901.36</td><td>873.58</td></tr></table>",
"bbox": [122, 172, 485, 332],
"page_idx": 8
},
{
"type": "text",
"text": "Table 2: Runtimes for the full reranking baseline (AllComet), BayesOpt+GP, and BayesOpt+GP+P with Distilled-S as the proxy score, at settings where CometKiwi scores are roughly equal. Times are given in seconds per 350 instances.",
"bbox": [112, 341, 487, 413],
"page_idx": 8
},
{
"type": "text",
"text": "4.5 Batch size $k$ in BayesOpt+GP",
"text_level": 1,
"bbox": [112, 442, 394, 458],
"page_idx": 8
},
{
"type": "text",
"text": "We examine the effect of batch size $k$ in BayesOpt+GP for $k = 1, 2, 5, 10$. Figure 5 shows that, as expected, larger $k$ diminishes performance, although the differences nearly vanish at $n > 70$.",
"bbox": [112, 462, 487, 526],
"page_idx": 8
},
{
"type": "text",
"text": "$k$ determines how often the BayesOpt loop is run and thus has a large effect on speed. Fortunately, we observe that for sufficiently large $n$, $k$ can be increased without sacrificing quality.",
"bbox": [112, 527, 489, 592],
"page_idx": 8
},
{
"type": "image",
"img_path": "images/1b2b6d4eeea2da104a821494a8635484be71cffba95405cf304a2aceb62e785b.jpg",
"image_caption": [
"Figure 5: Difference between BayesOpt+GP with a batch size of 1 (top line in red in Figure 2) and BayesOpt+GP with higher batch sizes. Negative values mean that the higher batch size performed worse than BayesOpt+GP with a batch size of 1."
],
"image_footnote": [],
"bbox": [136, 608, 465, 744],
"page_idx": 8
},
{
"type": "text",
"text": "5 Conclusion",
"text_level": 1,
"bbox": [112, 862, 245, 876],
"page_idx": 8
},
{
"type": "text",
"text": "In this work, we formalize MT reranking as a Bayesian optimization problem, leveraging the",
"bbox": [112, 887, 489, 919],
"page_idx": 8
},
{
"type": "text",
"text": "basic observation that similar translations are more likely to have similar quality scores. We also extend the framework to accept observations from proxy scoring functions, which is applicable when the target score is very costly: large QE models, MBR, or human evaluation. In realistic experiments, we show that our methods improve reranking efficiency over strong baselines. We also propose several design choices that make the methods useful in practice: a GP kernel that requires minimal overhead, and effective proxy model training via distillation.",
"bbox": [505, 84, 884, 275],
"page_idx": 8
},
{
"type": "text",
"text": "We consider our work a first step in applying BayesOpt to MT reranking. Future directions include integrating BayesOpt with candidate generation, alternative acquisition functions, and further exploration of GP kernels for MT.",
"bbox": [507, 278, 884, 357],
"page_idx": 8
},
{
"type": "text",
"text": "6 Limitations",
"text_level": 1,
"bbox": [507, 370, 645, 386],
"page_idx": 8
},
{
"type": "text",
"text": "The optimization problem considered in this work is to maximize the score from a scoring model. We show that BayesOpt is an effective optimizer, but we do not explore to what extent the optimization problem is flawed due to flaws in the scoring model. We refer to Kocmi et al. (2024b) to understand what magnitude of score difference between systems is significant. However, the existence of \"metric overfitting\" when directly optimizing an evaluation metric is debated and may affect the interpretation of score differences (Fernandes et al., 2022; Wang et al., 2024).",
"bbox": [505, 397, 882, 589],
"page_idx": 8
},
{
"type": "text",
"text": "BayesOpt+GP requires matrix inversion, an $\\mathcal{O}(|\\mathcal{C}|^3)$ operation that is performed once per iteration. While it is inexpensive for the $|\\mathcal{C}|$ we consider, this limits the number of observations that can be used for posterior computation without resorting to approximations (Noack et al., 2023).",
"bbox": [507, 590, 882, 686],
"page_idx": 8
},
{
"type": "text",
"text": "As an iterative algorithm, BayesOpt can score no more than $k$ candidates in a batch for a single instance. Small batch sizes introduce a significant bottleneck for large neural networks, so in order to maintain large batch sizes, we propose processing multiple instances in parallel. However, this requires additional engineering.",
"bbox": [507, 687, 882, 800],
"page_idx": 8
},
{
"type": "text",
"text": "Acknowledgements",
"text_level": 1,
"bbox": [509, 813, 680, 829],
"page_idx": 8
},
{
"type": "text",
"text": "Julius Cheng is supported by a scholarship from Huawei. Part of this work received support from the European Union's Horizon research and innovation programme under grant agreement No 101135798, project Meetween (My Personal AI",
"bbox": [507, 838, 882, 917],
"page_idx": 8
},
{
"type": "page_number",
"text": "2857",
"bbox": [480, 927, 519, 940],
"page_idx": 8
},
{
"type": "text",
"text": "Mediator for Virtual MEETtings BetWEEN People). We thank the organizers of MT Marathon 2024, where the authors met and this work was conceived. We also thank Beni Egressy for useful discussions and Will Tebbutt for lending expertise on GPs.",
"bbox": [112, 84, 489, 181],
"page_idx": 9
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [115, 206, 213, 222],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Daniel Beck and Trevor Cohn. 2017. Learning kernels over strings using Gaussian processes. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 67-73. Asian Federation of Natural Language Processing.",
"Daniel Beck, Trevor Cohn, and Lucia Specia. 2014. Joint emotion analysis via multi-task Gaussian processes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1798-1803. Association for Computational Linguistics.",
"Daniel Beck, Kashif Shah, Trevor Cohn, and Lucia Specia. 2013. SHEF-Lite: When less is more for translation quality estimation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, 337-342. Association for Computational Linguistics.",
"James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. 2011. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, volume 24. Curran Associates, Inc.",
"Sumanta Bhattacharyya, Amirmohammad Rooshenas, Subhajit Naskar, Simeng Sun, Mohit Iyyer, and Andrew McCallum. 2021. Energy-based reranking: Improving neural machine translation using energy-based models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 4528-4537. Association for Computational Linguistics.",
"Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05), 173-180. Association for Computational Linguistics.",
"Lingjiao Chen, Matei Zaharia, and James Zou. 2023. FrugalGPT: How to use large language models while reducing cost and improving performance.",
"Julius Cheng and Andreas Vlachos. 2023. Faster minimum Bayes risk decoding with confidence-based pruning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 12473-12480. Association for Computational Linguistics."
],
"bbox": [115, 230, 489, 917],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25-70.",
"Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente. 2009. Introduction to Derivative-Free Optimization. Society for Industrial and Applied Mathematics.",
"Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. CoRR, abs/1911.02116.",
"Hiroyuki Deguchi, Yusuke Sakai, Hidetaka Kamigaito, Taro Watanabe, Hideki Tanaka, and Masao Utiyama. 2024. Centroid-based efficient minimum Bayes risk decoding. In Findings of the Association for Computational Linguistics ACL 2024, 11009-11018, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.",
"Bryan Eikema and Wilker Aziz. 2020. Is MAP decoding all you need? the inadequacy of the mode in neural machine translation. In Proceedings of the 28th International Conference on Computational Linguistics, 4506-4520. International Committee on Computational Linguistics.",
"Bryan Eikema and Wilker Aziz. 2022. Sampling-based approximations to minimum Bayes risk decoding for neural machine translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 10978-10993. Association for Computational Linguistics.",
"Patrick Fernandes, Antonio Farinhas, Ricardo Rei, José G. C. de Souza, Perez Ogayo, Graham Neubig, and Andre Martins. 2022. Quality-aware decoding for neural machine translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1396-1412. Association for Computational Linguistics.",
"Mara Finkelstein, Subhajit Naskar, Mehdi Mirzazadeh, Apurva Shah, and Markus Freitag. 2024. MBR and QE finetuning: Training-time distillation of the best and most expensive decoding methods.",
"Markus Freitag, Behrooz Ghorbani, and Patrick Fernandes. 2023a. Epsilon sampling rocks: Investigating sampling strategies for minimum Bayes risk decoding for machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2023, 9198-9209. Association for Computational Linguistics.",
"Markus Freitag, David Grangier, Qijun Tan, and Bowen Liang. 2022. High quality rather than high model probability: Minimum Bayes risk decoding with neural metrics. Transactions of the Association for Computational Linguistics, 10:811-825."
],
"bbox": [510, 84, 884, 917],
"page_idx": 9
},
{
"type": "page_number",
"text": "2858",
"bbox": [
480,
928,
519,
940
],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Markus Freitag, Nitika Mathur, Chi-kiu Lo, Eleftherios Avramidis, Ricardo Rei, Brian Thompson, Tom Kocmi, Frederic Blain, Daniel Deutsch, Craig Stewart, Chrysoula Zerva, Sheila Castilho, Alon Lavie, and George Foster. 2023b. Results of WMT23 metrics shared task: Metrics might be guilty but references are not innocent. In Proceedings of the Eighth Conference on Machine Translation, 578-628. Association for Computational Linguistics.",
"Philipp Hennig and Christian J. Schuler. 2011. Entropy search for information-efficient global optimization. arXiv, abs/1112.1217.",
"John Hewitt, Christopher Manning, and Percy Liang. 2022. Truncation sampling as language model desmoothing. In Findings of the Association for Computational Linguistics: EMNLP 2022, 3414-3427. Association for Computational Linguistics.",
"Donald R. Jones. 2001. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21:345-383.",
"Juraj Juraska, Mara Finkelstein, Daniel Deutsch, Aditya Siddhant, Mehdi Mirzazadeh, and Markus Freitag. 2023. MetricX-23: The Google submission to the WMT 2023 metrics shared task. In Proceedings of the Eighth Conference on Machine Translation, 756-767. Association for Computational Linguistics.",
"Kirthevasan Kandasamy, Gautam Dasarathy, Junier B Oliva, Jeff Schneider, and Barnabas Poczos. 2016. Gaussian process bandit optimisation with multifidelity evaluations. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.",
"Tom Kocmi, Eleftherios Avramidis, Rachel Bawden, Ondrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Markus Freitag, Thamme Gowda, Roman Grundkiewicz, Barry Haddow, Marzena Karpinska, Philipp Koehn, Benjamin Marie, Christof Monz, Kenton Murray, Masaaki Nagata, Martin Popel, Maja Popovic, Mariya Shmatova, Steinthor Steingrímsson, and Vilém Zouhar. 2024a. Findings of the WMT24 general machine translation shared task: The LLM era is here but MT is not solved yet. In Proceedings of the Ninth Conference on Machine Translation, 1-46, Miami, Florida, USA. Association for Computational Linguistics.",
"Tom Kocmi, Rachel Bawden, Ondrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Rebecca Knowles, Philipp Koehn, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Michal Novak, Martin Popel, and Maja Popovic. 2022. Findings of the 2022 conference on machine translation (WMT22). In Proceedings of the Seventh Conference on Machine Translation (WMT), 1-45. Association for Computational Linguistics."
],
"bbox": [
115,
85,
489,
917
],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Tom Kocmi, Vilém Zouhar, Christian Federmann, and Matt Post. 2024b. Navigating the metrics maze: Reconciling score magnitudes and accuracies. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1999-2014, Bangkok, Thailand. Association for Computational Linguistics.",
"Andreas Krause, Ajit Singh, and Carlos Guestrin. 2008. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. J. Mach. Learn. Res., 9:235-284.",
"Ann Lee, Michael Auli, and Marc'Aurelio Ranzato. 2021. Discriminative reranking for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 7250-7264. Association for Computational Linguistics.",
"Jonas Mockus. 1974. On Bayesian methods for seeking the extremum. In Proceedings of the IFIP Technical Conference, 400-404.",
"Marcus M. Noack, Harinarayan Krishnan, Mark D. Risser, and Kristofer G. Reyes. 2023. Exact Gaussian processes for massive datasets via non-stationary sparsity-discovering kernels. Scientific Reports, 13(1).",
"Slav Petrov. 2011. Coarse-to-Fine Natural Language Processing (Theory and Applications of Natural Language Processing). Springer Publishing Company, Incorporated.",
"C.E. Rasmussen and C.K.I. Williams. 2005. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning series. MIT Press.",
"Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022. SummaReranker: A multi-task mixture-of-experts re-ranking framework for abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 4504-4524. Association for Computational Linguistics.",
"Ricardo Rei, Ana C Farinha, José G.C. de Souza, Pedro G. Ramos, André F.T. Martins, Luisa Coheur, and Alon Lavie. 2022a. Searching for COMETINHO: The little metric that could. In Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, 61-70. European Association for Machine Translation.",
"Ricardo Rei, Jose Pombal, Nuno M. Guerreiro, João Alves, Pedro Henrique Martins, Patrick Fernandes, Helena Wu, Tania Vaz, Duarte Alves, Amin Farajian, Sweta Agrawal, Antonio Farinhas, José G. C. De Souza, and André Martins. 2024. Tower v2: Unbabel-IST 2024 submission for the general MT shared task. In Proceedings of the Ninth Conference on Machine Translation, 185-204, Miami, Florida, USA. Association for Computational Linguistics."
],
"bbox": [
510,
85,
882,
917
],
"page_idx": 10
},
{
"type": "page_number",
"text": "2859",
"bbox": [
480,
928,
519,
940
],
"page_idx": 10
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2685-2702. Association for Computational Linguistics.",
"Ricardo Rei, Marcos Treviso, Nuno M. Guerreiro, Chrysoula Zerva, Ana C Farinha, Christine Maroti, José G. C. de Souza, Taisiya Glushkova, Duarte Alves, Luisa Coheur, Alon Lavie, and André F. T. Martins. 2022b. CometKiwi: IST-Unbabel 2022 submission for the quality estimation shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), 634-645. Association for Computational Linguistics.",
"Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 7881-7892. Association for Computational Linguistics.",
"Bobak Shahriari, Kevin Swersky, Ziyun Wang, Ryan P. Adams, and Nando de Freitas. 2016. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104:148-175.",
"Prasann Singhal, Jiacheng Xu, Xi Ye, and Greg Durrett. 2023. EEL: Efficiently encoding lattices for reranking. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 9299-9316. Association for Computational Linguistics.",
"Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias W. Seeger. 2009. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory, 58:3250-3265.",
"NLLB Team, Marta R. Costa-jussà, James Cross, Onur Celebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation.",
"Firas Trabelsi, David Vilar, Mara Finkelstein, and Markus Freitag. 2024. Efficient minimum Bayes risk decoding using low-rank matrix completion algorithms.",
"Jannis Vamvas and Rico Sennrich. 2024. Linear-time minimum Bayes risk decoding with reference aggregation. In Proceedings of the 62nd Annual Meeting"
],
"bbox": [
115,
85,
489,
917
],
"page_idx": 11
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"of the Association for Computational Linguistics (Volume 2: Short Papers), 790-801, Bangkok, Thailand. Association for Computational Linguistics.",
"Giorgos Vernikos and Andrei Popescu-Belis. 2024. Don't rank, combine! combining machine translation hypotheses using quality estimation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 12087-12105, Bangkok, Thailand. Association for Computational Linguistics.",
"Jun Wang, Eleftheria Briakou, Hamid Dadkhahi, Rishabh Agarwal, Colin Cherry, and Trevor Cohn. 2024. Don't throw away data: Better sequence knowledge distillation.",
"Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers.",
"Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, and Andrew Gordon Wilson. 2020. Practical multifidelity Bayesian optimization for hyperparameter tuning. In Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, volume 115 of Proceedings of Machine Learning Research, 788-798. PMLR.",
"Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation.",
"Kyra Yee, Yann Dauphin, and Michael Auli. 2019. Simple and effective noisy channel modeling for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 5696-5701. Association for Computational Linguistics.",
"Chrysoula Zerva, Frédéric Blain, Ricardo Rei, Piyawat Lertvittayakumjorn, José G. C. de Souza, Steffen Eger, Diptesh Kanojia, Duarte Alves, Constantin Orasan, Marina Fomicheva, André F. T. Martins, and Lucia Specia. 2022. Findings of the WMT 2022 shared task on quality estimation. In Proceedings of the Seventh Conference on Machine Translation (WMT), 69-99. Association for Computational Linguistics."
],
"bbox": [
510,
85,
882,
864
],
"page_idx": 11
},
{
"type": "page_number",
"text": "2860",
"bbox": [
480,
928,
519,
940
],
"page_idx": 11
},
{
"type": "table",
"img_path": "images/8b0a96c406454857b48c92f932f6117be7de93c065ce282b3b581bb7c32ab779.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Method</td><td rowspan=\"2\">Figure</td><td colspan=\"10\">CometKiwi runs</td></tr><tr><td>10</td><td>20</td><td>30</td><td>40</td><td>50</td><td>60</td><td>70</td><td>80</td><td>90</td><td>100</td></tr><tr><td>UniqRandom</td><td>2</td><td>0.7917</td><td>0.8022</td><td>0.8074</td><td>0.8104</td><td>0.8124</td><td>0.8140</td><td>0.8149</td><td>0.8160</td><td>0.8168</td><td>0.8175</td></tr><tr><td>LogprobAvg</td><td>2</td><td>0.7956</td><td>0.8055</td><td>0.8101</td><td>0.8129</td><td>0.8149</td><td>0.8162</td><td>0.8171</td><td>0.8181</td><td>0.8187</td><td>0.8193</td></tr><tr><td>LogprobSum</td><td>2</td><td>0.7519</td><td>0.7723</td><td>0.7834</td><td>0.7913</td><td>0.7974</td><td>0.8019</td><td>0.8051</td><td>0.8081</td><td>0.8109</td><td>0.8125</td></tr><tr><td>HillClimbing</td><td>2</td><td>0.7917</td><td>0.8080</td><td>0.8124</td><td>0.8148</td><td>0.8165</td><td>0.8176</td><td>0.8184</td><td>0.8191</td><td>0.8196</td><td>0.8200</td></tr><tr><td>ProxyFirst 200 Distilled-S</td><td>3</td><td>0.8081</td><td>0.8141</td><td>0.8167</td><td>0.8181</td><td>0.8190</td><td>0.8197</td><td>0.8202</td><td>0.8206</td><td>0.8208</td><td>0.8210</td></tr><tr><td>ProxyFirst 200 Distilled-M</td><td>3</td><td>0.8119</td><td>0.8165</td><td>0.8184</td><td>0.8194</td><td>0.8201</td><td>0.8206</td><td>0.8209</td><td>0.8211</td><td>0.8212</td><td>0.8213</td></tr><tr><td>ProxyFirst 50 Distilled-S</td><td>3</td><td>0.8054</td><td>0.8100</td><td>0.8114</td><td>0.8121</td><td>0.8124</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>ProxyFirst 50 Distilled-M</td><td>3</td><td>0.8073</td><td>0.8107</td><td>0.8119</td><td>0.8122</td><td>0.8124</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>BayesOpt+GP</td><td>2,4,3</td><td>0.7917</td><td>0.8121</td><td>0.8167</td><td>0.8190</td><td>0.8201</td><td>0.8206</td><td>0.8210</td><td>0.8212</td><td>0.8213</td><td>0.8214</td></tr><tr><td>BayesOpt+GP+P with LogprobAvg</td><td>4</td><td>0.7956</td><td>0.8123</td><td>0.8166</td><td>0.8187</td><td>0.8198</td><td>0.8205</td><td>0.8208</td><td>0.8210</td><td>0.8213</td><td>0.8214</td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-S</td><td>4,3</td><td>0.8081</td><td>0.8165</td><td>0.8190</td><td>0.8200</td><td>0.8207</td><td>0.8210</td><td>0.8212</td><td>0.8213</td><td>0.8214</td><td>0.8215</td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-M</td><td>4,3</td><td>0.8119</td><td>0.8182</td><td>0.8199</td><td>0.8205</td><td>0.8209</td><td>0.8211</td><td>0.8213</td><td>0.8214</td><td>0.8215</td><td>0.8215</td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-S</td><td>4,3</td><td>0.8054</td><td>0.8153</td><td>0.8184</td><td>0.8196</td><td>0.8204</td><td>0.8208</td><td>0.8210</td><td>0.8213</td><td>0.8214</td><td>0.8214</td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-M</td><td>4,3</td><td>0.8073</td><td>0.8164</td><td>0.8187</td><td>0.8200</td><td>0.8207</td><td>0.8209</td><td>0.8211</td><td>0.8213</td><td>0.8214</td><td>0.8215</td></tr></table>",
"bbox": [
119,
82,
877,
278
],
"page_idx": 12
},
{
"type": "table",
"img_path": "images/26b897f7f4bde48b89469e99cc4ba09271295e3a545ce3fceb2214d4e24a108a.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td rowspan=\"2\">Method</td><td rowspan=\"2\">Figure</td><td colspan=\"10\">CometKiwi runs</td></tr><tr><td>110</td><td>120</td><td>130</td><td>140</td><td>150</td><td>160</td><td>170</td><td>180</td><td>190</td><td>200</td></tr><tr><td>UniqRandom</td><td>2</td><td>0.8182</td><td>0.8188</td><td>0.8192</td><td>0.8197</td><td>0.8200</td><td>0.8205</td><td>0.8208</td><td>0.8211</td><td>0.8214</td><td>0.8216</td></tr><tr><td>LogprobAvg</td><td>2</td><td>0.8199</td><td>0.8203</td><td>0.8205</td><td>0.8209</td><td>0.8211</td><td>0.8212</td><td>0.8213</td><td>0.8214</td><td>0.8216</td><td>0.8216</td></tr><tr><td>LogprobSum</td><td>2</td><td>0.8139</td><td>0.8156</td><td>0.8170</td><td>0.8180</td><td>0.8188</td><td>0.8196</td><td>0.8204</td><td>0.8209</td><td>0.8212</td><td>0.8216</td></tr><tr><td>HillClimbing</td><td>2</td><td>0.8203</td><td>0.8206</td><td>0.8208</td><td>0.8209</td><td>0.8211</td><td>0.8213</td><td>0.8214</td><td>0.8215</td><td>0.8216</td><td>0.8216</td></tr><tr><td>BayesOpt+GP</td><td>2,4,3</td><td>0.8215</td><td>0.8215</td><td>0.8215</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td></tr><tr><td>BayesOpt+GP+P with LogprobAvg</td><td>4</td><td>0.8214</td><td>0.8215</td><td>0.8215</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td></tr></table>",
"bbox": [
119,
297,
877,
404
],
"page_idx": 12
},
{
"type": "text",
"text": "Table 3: Exact values (selected candidate score) for Figures 2 to 4.",
"bbox": [
272,
414,
724,
429
],
"page_idx": 12
},
{
"type": "page_number",
"text": "2861",
"bbox": [
480,
928,
519,
940
],
"page_idx": 12
},
{
"type": "text",
"text": "A Statistical Significance",
"text_level": 1,
"bbox": [
114,
84,
346,
99
],
"page_idx": 13
},
{
"type": "text",
"text": "We measure statistical significance between two methods based on the final candidate CometKiwi scores at budgets 30, 60, and 90, and across the budget range from 10 to 190, in Table 4. To determine whether one method is better than another, we use a one-sided paired Student's t-test with a p-value threshold of 0.01, run across the individual samples.",
"bbox": [
112,
109,
882,
174
],
"page_idx": 13
},
{
"type": "table",
"img_path": "images/341f21c287c43dd8697799fd63b5897fa05044fd38fa3b924eb7d3744534fa66.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Budget 30</td><td>UniqRandom</td><td>LogprobAvg</td><td>LogprobSum</td><td>HillClimbing</td><td>BayesOpt+GP</td><td>Budget 60</td><td>UniqRandom</td><td>LogprobAvg</td><td>LogprobSum</td><td>HillClimbing</td><td>BayesOpt+GP</td></tr><tr><td>LogprobAvg</td><td>←</td><td>←</td><td>↑</td><td>↑</td><td></td><td>LogprobAvg</td><td>←</td><td>←</td><td>↑</td><td>↑</td><td></td></tr><tr><td>LogprobSum</td><td>↑</td><td>↑</td><td></td><td>↑</td><td>↑</td><td>LogprobSum</td><td>↑</td><td>↑</td><td></td><td>↑</td><td>↑</td></tr><tr><td>HillClimbing</td><td>←</td><td>←</td><td>←</td><td></td><td>↑</td><td>HillClimbing</td><td>←</td><td>←</td><td>←</td><td></td><td>↑</td></tr><tr><td>ProxyFirst 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>ProxyFirst 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>↑</td><td></td></tr><tr><td>ProxyFirst 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>ProxyFirst 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>ProxyFirst 50 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>↑</td><td>↑</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>ProxyFirst 50 Distilled-M</td><td>←</td><td>←</td><td>←</td><td></td><td>↑</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>BayesOpt+GP</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP</td><td>←</td><td>←</td><td>←</td><td></td><td></td></tr><tr><td>BayesOpt+GP+P with LogprobAvg</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP+P with LogprobAvg</td><td>←</td><td>←</td><td>←</td><td></td><td></td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>BayesOpt+GP+P with 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>BayesOpt+GP+P with 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>BayesOpt+GP+P with 50 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>BayesOpt+GP+P with 50 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr></table>",
"bbox": [
117,
183,
868,
406
],
"page_idx": 13
},
{
"type": "table",
"img_path": "images/e7ada8294b6fa0cbffe3101dd9c93019ba2e03dd6b24ae18e024e2fd4a20c5a4.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td>Budget 90</td><td>UniqRandom</td><td>LogprobAvg</td><td>LogprobSum</td><td>HillClimbing</td><td>BayesOpt+GP</td><td>Across budgets 10 to 190</td><td>UniqRandom</td><td>LogprobAvg</td><td>LogprobSum</td><td>HillClimbing</td><td>BayesOpt+GP</td></tr><tr><td>LogprobAvg</td><td>←</td><td>←</td><td>↑</td><td>↑</td><td></td><td>LogprobAvg</td><td>←</td><td>←</td><td>↑</td><td>↑</td><td></td></tr><tr><td>LogprobSum</td><td>↑</td><td>↑</td><td>↑</td><td>↑</td><td></td><td>LogprobSum</td><td>↑</td><td>↑</td><td>↑</td><td>↑</td><td></td></tr><tr><td>HillClimbing</td><td>←</td><td>←</td><td>←</td><td>↑</td><td></td><td>HillClimbing</td><td>←</td><td>←</td><td>←</td><td>↑</td><td></td></tr><tr><td>ProxyFirst 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td>↑</td><td>ProxyFirst 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>ProxyFirst 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>ProxyFirst 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with LogprobAvg</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP+P with LogprobAvg</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP+P with 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>BayesOpt+GP+P with 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP+P with 50 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP+P with 50 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr></table>",
"bbox": [
117,
430,
868,
627
],
"page_idx": 13
},
{
"type": "text",
"text": "Table 4: Statistical significance comparison between the proposed methods across various CometKiwi call budgets. Within a cell, $\\uparrow$ means that the column method (in the header) is statistically significantly better than the row method, and $\\leftarrow$ means the opposite. If a cell is empty, neither method is significantly better than the other. For example, in the Budget 30 (top left) table, the $\\leftarrow$ in the third row and first column means that HillClimbing is significantly better than UniqRandom with a budget of 30.",
"bbox": [
112,
637,
884,
709
],
"page_idx": 13
},
{
"type": "page_number",
"text": "2862",
"bbox": [
480,
928,
519,
940
],
"page_idx": 13
}
]
2025/A Bayesian Optimization Approach to Machine Translation Reranking/e6893c9d-09ab-4c0e-a429-b12ff05203f6_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2025/A Bayesian Optimization Approach to Machine Translation Reranking/e6893c9d-09ab-4c0e-a429-b12ff05203f6_origin.pdf
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:09377ff87ea7f05aedca4302290003746edbbdbe3648de939b74d9010db53da0
size 661778
2025/A Bayesian Optimization Approach to Machine Translation Reranking/full.md
ADDED
# A Bayesian Optimization Approach to Machine Translation Reranking

Julius Cheng<sup>1</sup> Maike Züflé<sup>2</sup> Vilém Zouhar<sup>3</sup> Andreas Vlachos<sup>1</sup>

$^{1}$ University of Cambridge $^{2}$ Karlsruhe Institute of Technology $^{3}$ ETH Zürich
{jncc3, av308}@cam.ac.uk maike.zuefle@kit.edu vzouhar@ethz.ch

# Abstract

Reranking, or scoring a list of prediction candidates from a machine translation system with an external scoring model and returning the highest-scoring candidate, remains a simple and effective method for improving prediction quality. However, reranking with high-quality scoring models can add substantial computational cost to the translation pipeline, which we address in this work by framing list reranking as a Bayesian optimization (BayesOpt) problem over the candidate list, where unknown scores are modeled with a Gaussian process. This algorithm scores candidates iteratively, choosing next candidates by balancing between exploration, choosing to score those that differ from candidates already scored, and exploitation, choosing to score those that resemble high-scoring candidates. This procedure finds high-scoring candidates while scoring only a fraction of the candidate list; given candidate lists of 200 random samples (before deduplication), our method achieves the same CometKiwi score using only 70 scoring evaluations on average compared to scoring a random subset of 180 candidates. We also propose multi-fidelity BayesOpt for list reranking, where scores obtained from a noisier but cheaper proxy scoring model are incorporated into the search process. We show that well-trained distilled proxy scorers can further improve the performance of BayesOpt.

# 1 Introduction

Reranking is a framework for prediction where a probabilistic generator model produces a list of candidates, and a separate evaluator or scoring model produces scores for each of the candidates, which are used to determine the final prediction. Reranking has a long history in natural language processing for sequential prediction problems such as dependency parsing (Collins and Koo, 2005; Charniak and Johnson, 2005) and language modeling problems such as summarization (Ravaut et al., 2022) and machine translation (MT; Fernandes et al., 2022).




Figure 1: A machine translation system generates candidates Aa, Bb, Cc, Dd, and Ee. The goal of BayesOpt is to find the highest-scoring candidate with fewer scoring calls. An acquisition function repeatedly selects the next candidate to score until the budget is reached, and the candidate with the highest score so far is returned.

The quality of models for automatic MT evaluation has surged in recent years due to innovations in neural network architecture (Rei et al., 2020; Juraska et al., 2023; Sellam et al., 2020) as well as the abundance of training data (Freitag et al., 2023b; Kocmi et al., 2024a). These evaluation models are often repurposed for reranking to further improve the performance of an MT system. For instance, in the WMT 2024 shared task (Kocmi et al., 2024a), 5 out of 19 systems, including the overall best submission (Rei et al., 2024), use reranking with Comet models (Rei et al., 2020) and/or minimum Bayes risk decoding (MBR; Eikema and Aziz, 2020), which can be interpreted as a form of reranking. Prior to the application of automatic evaluation metrics to reranking, other scoring methods have been proposed, including discriminatively trained classifiers (Lee et al., 2021; Bhattacharyya et al., 2021) and noisy channel decoding (Yee et al., 2019).

So, while LMs for MT generation have greatly improved in recent years, scoring models have seen a commensurate increase in quality (Zerva et al., 2022), and thus reranking remains a relevant method for improving translation quality. However, the scoring models have also grown dramatically in size, increasing the computational requirements for reranking.

In this work, we address the computational cost of reranking by framing it as a search problem over the list of candidates. The goal of search in this setting is to find high-scoring candidates in a small number of steps, thereby avoiding the cost of scoring the full list. Our proposed algorithm uses Gaussian processes to model uncertainty about unseen scores and Bayesian optimization (BayesOpt; Shahriari et al., 2016) to choose which candidates to score next.

GPs are flexible priors over functions which are able to model the complex and nonlinear relationship between each candidate and its score. GPs make very few assumptions about the distribution and base their predictions mostly on observed points, which enables them to easily adapt to different candidate lists across translation instances. BayesOpt is a sequential black-box optimization method that uses the posterior mean and variance of unobserved data points to decide which points to evaluate next.

We apply BayesOpt and GPs (BayesOpt+GP) to MT list reranking in a straightforward manner and show that it obtains close to the maximum achievable score with only a fraction of the score evaluations. For example, the maximal obtainable score across 200 randomly sampled candidates on our test set is 0.8216 CometKiwi; our method achieves 0.8210 with 70 score evaluations on average, while scoring 70 random candidates attains 0.8149, a difference of 0.0061 that is likely to be human-detectable according to Kocmi et al. (2024b). We also propose a number of search-based baselines which outperform random selection, all of which are outperformed by BayesOpt+GP.

Then, building upon previous works that use a faster but noisier proxy scoring function to prune the candidate list (Fernandes et al., 2022; Eikema and Aziz, 2022), we propose a multi-fidelity extension to BayesOpt which incorporates proxy scores to improve estimation. This is related in motivation to coarse-to-fine methods (Petrov, 2011) and model cascading (Chen et al., 2023), where the use of a faster proxy model reduces the use of the main model. In our multi-fidelity experiments, we find that smaller proxy scoring models distilled from the main model can assist BayesOpt+GP in finding high-scoring candidates earlier.

# 2 Background

# 2.1 Translation generation and reranking

In a typical machine translation setting, a conditional language model (LM) is trained to model the probability of the next token $y_{t}$ given a source sentence $x$ and previous tokens: $p(y_{t}|x,y_{1},\dots,y_{t - 1})$. These probabilities can be autoregressively combined to model a sequence probability $p(y|x)$. Usually, beam search is used to search for a $y$ which maximizes log probability combined with a length normalization objective (Wu et al., 2016).

In a basic list reranking setting, given $x$, the LM is used to generate a candidate list $\mathcal{C}_x = [y_1,\dots,y_n]$ with a decoding algorithm such as beam search or ancestral sampling. A scoring function $s(x,y_i)$ is then applied to each $y_{i}\in \mathcal{C}_{x}$, and the best-scoring sequence $\arg \max_{y_i\in \mathcal{C}_x}s(x,y_i)$ is returned. A common choice of scoring function is a quality estimation (QE) model which directly predicts a scalar value representing the quality.

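To make the setup concrete, exhaustive reranking is a one-liner; the sketch below is ours (illustrative names, not from the paper's codebase), with `score` standing in for $s$:

```python
def rerank(x, candidates, score):
    # Basic list reranking: evaluate s(x, y) for every candidate y
    # and return the argmax. Section 3 replaces this exhaustive
    # scoring with a guided search under a fixed budget.
    return max(candidates, key=lambda y: score(x, y))
```
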
Reranking with high-quality evaluation metrics has been shown to be highly effective at improving translation output (Freitag et al., 2022), though it can skew results when the same metric is also used for evaluation (Kocmi et al., 2024a). Reranking performance improves as the number of candidates increases (Vernikos and Popescu-Belis, 2024) and when multiple scoring metrics are combined to form a stronger prediction (Fernandes et al., 2022).

Reranking adds significant computational costs to prediction and may be prohibitive to use at test time, but it can be used to benefit LM training instead of test-time prediction; high-quality predictions obtained from reranking can be used for knowledge distillation (Wang et al., 2024) and self-training (Finkelstein et al., 2024). Such methods can improve the performance of an MT system without additional costs during test time.

Previous work on efficient reranking for MT is relatively limited. Fernandes et al. (2022) and Eikema and Aziz (2022) perform two-stage reranking by first pruning the list with a faster and noisier scoring function to a fixed size before evaluating the target score. There has been recent interest in efficient approximations for MBR (Cheng and Vlachos, 2023; Deguchi et al., 2024; Trabelsi et al., 2024; Vamvas and Sennrich, 2024), but these methods are not applicable to general scoring functions. Singhal et al. (2023) propose to represent the candidate space compactly in a lattice over which a token-level reranker can efficiently score many candidates. In this work, we address a more general setting: the reranking of candidate lists with arbitrary black-box scoring functions.

# 2.2 Bayesian optimization with Gaussian process prior

Bayesian optimization is a sequential algorithm for optimizing a black-box function $f$, which is assumed to be drawn from a prior distribution over functions. The main loop of BayesOpt is as follows: given a set of (possibly noisy) observations $f(a_{1}),\dots,f(a_{i})$, the prior distribution over $f$ is updated to a posterior distribution with Bayes' theorem. An acquisition function determines a query point $a_{i + 1}$ at which to evaluate $f$ next. $f(a_{i + 1})$ is evaluated and added to the set of observations. This repeats until a stopping criterion is reached. The principal design choices in BayesOpt are the prior distribution of $f$ and the acquisition function.

A common choice of prior is the Gaussian process, which assumes that any subset of points $f(a_{1}), \ldots, f(a_{i})$ are drawn jointly from a multivariate Gaussian distribution $\mathcal{N}(\mu, \mathcal{K})$, where $\mathcal{K}$ is the covariance matrix defined by a kernel function such as the radial basis function (RBF) kernel. RBFs define the covariance of two points $a$ and $a'$ as:

$$
\mathcal{K}_{\mathrm{RBF}}(a, a') = \exp\left(-\frac{\|a - a'\|^{2}}{2w^{2}}\right), \tag{1}
$$

where $w$ is the bandwidth hyperparameter which determines scaling. The choice of kernel dictates prior assumptions about the shape of $f$; with RBF, points that are closer in Euclidean space have larger covariance. RBFs are a popular choice of kernel due to their ability to adapt to complex nonlinear functions.

The assumption that $f(a_{1}), \ldots, f(a_{i})$ are jointly Gaussian gives rise to a convenient posterior distribution. Given a vector of observed data points $\mathbf{a}$ and their observed values $f(\mathbf{a})$, the posterior mean $\mu_{a}$ and variance $\sigma_{a}$ of a point $a$ are given by the conditional multivariate Gaussian distribution:

$$
\mu_{a} = \mu + \mathcal{K}(a, \mathbf{a})\left(\mathcal{K}(\mathbf{a}, \mathbf{a}) + \sigma^{2} I\right)^{-1} f(\mathbf{a}) \tag{2}
$$

$$
\sigma_{a} = \mathcal{K}(a, a) + \sigma^{2} - \mathcal{K}(a, \mathbf{a})\left(\mathcal{K}(\mathbf{a}, \mathbf{a}) + \sigma^{2} I\right)^{-1}\mathcal{K}(\mathbf{a}, a) \tag{3}
$$

where $\mu$ is the unconditional mean of the distribution, $\sigma^2$ is a constant Gaussian noise on observations, $I$ is the identity matrix, and $\mathcal{K}$ here returns elementwise kernel values when given vector arguments.

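To make these formulas concrete, here is a minimal NumPy sketch of Equations (1) to (3) under a zero unconditional mean; the function names are ours, not from the paper's code:

```python
import numpy as np

def rbf_kernel(A, B, w=1.0):
    # Equation (1), computed pairwise: exp(-||a - a'||^2 / (2 w^2)).
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-np.maximum(sq, 0.0) / (2 * w**2))

def gp_posterior(X_obs, f_obs, X_query, w=1.0, noise=1e-6):
    # Posterior mean and variance (Equations 2 and 3) with mu = 0.
    K_oo = rbf_kernel(X_obs, X_obs, w) + noise * np.eye(len(X_obs))
    K_qo = rbf_kernel(X_query, X_obs, w)
    mu = K_qo @ np.linalg.solve(K_oo, f_obs)
    v = np.linalg.solve(K_oo, K_qo.T)              # (K + sigma^2 I)^{-1} K(a_obs, a)
    var = 1.0 + noise - np.sum(K_qo * v.T, axis=1) # K_RBF(a, a) = 1 for RBF kernels
    return mu, np.maximum(var, 0.0)
```
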
The acquisition function in BayesOpt is the strategy for selecting the next point to evaluate in the optimization process. Acquisition functions can seek the highest expected improvement (EI; Mockus, 1974), an upper confidence bound if the scores are noisy (Srinivas et al., 2009), or information gain (Hennig and Schuler, 2011). We use EI, defined as:

$$
\alpha(a) = \mathbb{E}\left[\max(f(a) - f(a^{+}), 0)\right], \tag{4}
$$

where $a^+$ is the location of the current best observation. When $f$ is Gaussian and there is no observation noise, this has the following closed-form solution (Jones, 2001):

$$
\alpha(a) = \sigma_{a}\left(z \cdot \mathrm{cdf}(z) + \mathrm{pdf}(z)\right), \tag{5}
$$

where $z = \frac{\mu_{a} - f(a^{+})}{\sigma_{a}}$, and cdf, pdf are the Gaussian cumulative distribution function and probability density function, respectively. EI encourages both exploration of uncertain points and exploitation of high-scoring points; the quantity in Equation 5 can be increased by increasing $\mu_{a}$ or $\sigma_{a}$.

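Continuing the sketch above, the closed form of Equation 5 maps directly onto SciPy's standard normal cdf and pdf (again, an illustrative sketch rather than the authors' implementation):

```python
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    # Closed-form EI (Equation 5), vectorized over query points.
    sigma = np.maximum(sigma, 1e-12)   # guard against zero posterior variance
    z = (mu - f_best) / sigma
    return sigma * (z * norm.cdf(z) + norm.pdf(z))
```
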
The generality of BayesOpt and the modeling freedom enjoyed by GPs make them suitable for a great variety of tasks, including spatial monitoring (Krause et al., 2008) and hyperparameter optimization (Bergstra et al., 2011). GPs have been applied to text regression tasks (Beck et al., 2013, 2014; Beck and Cohn, 2017), but they are not as well-studied in NLP compared to many other domains.

# 3 Methods

# 3.1 MT reranking with Bayesian optimization

Our main algorithm is an adaptation of BayesOpt with GPs as described in Section 2.2 to the reranking setting. Each source sentence $x$ and its associated candidate list is treated as a standalone BayesOpt problem, meaning that no observations are shared across different $x$. Thus for brevity, we omit $x$ from notation when discussing BayesOpt for a particular instance.

Inputs: main metric $s$, proxy metric $s'$, hypotheses $\mathcal{C}$, scoring budget $n$ for evaluating $s$, number of initial main scores $\alpha$, number of initial proxy scores $\beta$, batch size $k$, precomputed multi-fidelity kernel $\mathcal{K}_{\mathrm{mult}}$.

Output: hypothesis with the highest observed score $\arg\max_{y\in \mathcal{C}_{\mathrm{obs}}} s(y)$

1: $\mathcal{C}_{\mathrm{obs}}^{\prime}\gets \binom{\mathcal{C}}{\min(\beta,|\mathcal{C}|)},\ \mathcal{C}_{\mathrm{obs}}\gets \binom{\mathcal{C}_{\mathrm{obs}}^{\prime}}{\min(\alpha,|\mathcal{C}|)}$ ▷ Sample initial subsets
2: $S_{\mathrm{obs}}\gets \{s(y)\mid y\in \mathcal{C}_{\mathrm{obs}}\}$ ▷ Compute scores for the main scoring function
3: $S_{\mathrm{obs}}^{\prime}\gets \{s^{\prime}(y)\mid y\in \mathcal{C}_{\mathrm{obs}}^{\prime}\}$ ▷ Compute proxy scores
4: while $|\mathcal{C}_{\mathrm{obs}}| < n$ and $|\mathcal{C}_{\mathrm{obs}}| < |\mathcal{C}|$ do
5: $\bar{\mathcal{C}}_{\mathrm{obs}}\gets \mathcal{C}\setminus \mathcal{C}_{\mathrm{obs}}$ ▷ Get complement of $\mathcal{C}_{\mathrm{obs}}$
6: $\hat{S}\gets \operatorname{Norm}(S_{\mathrm{obs}}),\ \hat{S}^{\prime}\gets \operatorname{Norm}(S_{\mathrm{obs}}^{\prime})$ ▷ Normalize observed scores to 0 mean, 1 variance
7: $y_{\mathrm{best}}\gets \arg\max_{y\in \mathcal{C}_{\mathrm{obs}}}\hat{S}(y)$ ▷ Get best observed point
8: $\forall y\in \bar{\mathcal{C}}_{\mathrm{obs}}:\ \mu_y,\sigma_y\gets$ posterior computed from $y,\mathcal{K}_{\mathrm{mult}},\hat{S},\hat{S}^{\prime}$ ▷ GP posterior as in Equations (2) and (3)
9: $\forall y\in \bar{\mathcal{C}}_{\mathrm{obs}}:\ \gamma_y\gets \operatorname{EI}(y_{\mathrm{best}},\mu_y,\sigma_y)$ ▷ Expected improvement as in Equation (5)
10: $\mathcal{C}_{\text{top-}k}\gets \operatorname{arg\,topk}_{y\in \bar{\mathcal{C}}_{\mathrm{obs}}}\ \gamma_y$ ▷ Select $k$ best hypotheses based on EI
11: $S_{\mathrm{obs}}\gets S_{\mathrm{obs}}\cup \{s(y)\mid y\in \mathcal{C}_{\text{top-}k}\}$ ▷ Compute scores for selected hypotheses
12: $\mathcal{C}_{\mathrm{obs}}\gets \mathcal{C}_{\mathrm{obs}}\cup \mathcal{C}_{\text{top-}k}$ ▷ Update observed hypotheses
13: end while
14: return $\arg\max_{y\in \mathcal{C}_{\mathrm{obs}}} s(y)$

Algorithm 1: The BayesOpt+GP+P algorithm. BayesOpt+GP is a special case of this where $\beta = 0$.

Let $s$ be the scoring function, an MT quality estimator. Let $\mathcal{C}$ be a set of candidates, $\mathcal{C}_{\mathrm{obs}} \subseteq \mathcal{C}$ the subset of candidates for which we have observed $s(y)$, and $\bar{\mathcal{C}}_{\mathrm{obs}} = \mathcal{C} \setminus \mathcal{C}_{\mathrm{obs}}$ all other $y$. To perform reranking for an instance, we first generate candidates $\mathcal{C}$ and initialize the algorithm by scoring a random $\alpha$-sized subset of the list with $s$. In each iteration of the algorithm loop, we normalize the observed scores to mean 0 and variance 1 and assume a 0 unconditional mean. Then we compute the GP posterior of all $y \in \bar{\mathcal{C}}_{\mathrm{obs}}$ with Equations 2 and 3 given the scores of $\mathcal{C}_{\mathrm{obs}}$, which is then used to compute EI with Equation 5, assuming no observation noise. We score the $k$ candidates in $\bar{\mathcal{C}}_{\mathrm{obs}}$ with the highest EI, adding them to $\mathcal{C}_{\mathrm{obs}}$ (and removing them from $\bar{\mathcal{C}}_{\mathrm{obs}}$), and repeat the loop, terminating when a predefined budget of $n$ calls to $s$ is reached (or when all candidates have been evaluated, in the case that $|\mathcal{C}| \leq n$). Finally, we choose $\arg \max_{y \in \mathcal{C}_{\mathrm{obs}}} s(y)$ as the prediction.

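A compact sketch of this loop in the single-fidelity case (Algorithm 1 with $\beta = 0$), reusing the hypothetical `gp_posterior` and `expected_improvement` helpers from Section 2.2; `embs` holds the candidate representations described next and `score_fn` wraps the QE model:

```python
def bayesopt_gp_rerank(embs, score_fn, n, alpha=10, k=1, w=1.0, seed=0):
    # Score alpha random candidates, then repeatedly score the k
    # unscored candidates with the highest expected improvement.
    rng = np.random.default_rng(seed)
    obs = [int(i) for i in rng.choice(len(embs), size=min(alpha, len(embs)),
                                      replace=False)]
    scores = {i: score_fn(i) for i in obs}
    while len(scores) < min(n, len(embs)):
        rest = [i for i in range(len(embs)) if i not in scores]
        vals = np.array([scores[i] for i in obs])
        vals = (vals - vals.mean()) / (vals.std() + 1e-12)  # Norm(): 0 mean, 1 var
        mu, var = gp_posterior(embs[obs], vals, embs[rest], w)
        ei = expected_improvement(mu, np.sqrt(var), vals.max())
        for j in np.argsort(-ei)[:k]:                       # top-k by EI
            i = rest[int(j)]
            scores[i] = score_fn(i)
            obs.append(i)
    return max(scores, key=scores.get)  # index of the best observed candidate
```
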
We now describe our choice of GP kernel. $y\in \mathcal{C}$ are strings, and we seek a representation that is fast to compute and to compare, since $|\mathcal{C}|$ representations are generated and computing the GP covariance matrix requires $|\mathcal{C}|^2$ comparisons. Our kernel is $\mathcal{K}_{\mathrm{MT}}(y_i,y_j) = \mathcal{K}_{\mathrm{RBF}}(\mathrm{emb}(y_i),\mathrm{emb}(y_j))$, where emb returns the mean-pooled token-level outputs of the final decoder layer when generating $y$, normalized to the unit norm after pooling. emb uses representations produced automatically during candidate list generation, so the additional cost to compute it is negligible. Also, the covariance matrix is fast to compute given the candidate list sizes and embedding dimensionality used in our experiments.

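Under our reading of this description, emb could be realized from cached decoder states roughly as follows; `states` (tokens × hidden, from the final decoder layer) and `mask` (1 for real tokens) are our assumed inputs:

```python
def emb(states, mask):
    # Mean-pool final-decoder-layer outputs over generated tokens,
    # then normalize to unit norm after pooling.
    pooled = (states * mask[:, None]).sum(0) / mask.sum()
    return pooled / (np.linalg.norm(pooled) + 1e-12)
```
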
# 3.2 Multi-fidelity BayesOpt

We also propose an extension to BayesOpt+GP for the setting where observations are available from a different but related proxy score function $s'$, which is assumed to have non-trivial covariance with the scoring model $s$ and to be cheaper to evaluate. We refer to this as BayesOpt+GP+P. This is known as multi-fidelity BayesOpt in the literature, but while the multi-fidelity settings of Kandasamy et al. (2016) and Wu et al. (2020) use acquisition functions that may choose to evaluate lower-fidelity scores, we study a simpler setting: $\beta$ observations of $s'$ are obtained at the start, where $\beta > \alpha$, and only $s$ may be evaluated during the BayesOpt loop. In the multi-fidelity setting, observations are made on $\langle y_i, s_i \rangle$, a combination of a data point and a scoring function, instead of the data point alone.

Our kernel for BayesOpt+GP+P is the product of the RBF kernel from Section 3.1 and a kernel over scoring functions:

$$
\mathcal{K}_{\mathrm{mult}}\left(\langle y_{i}, s_{k}\rangle, \langle y_{j}, s_{l}\rangle\right) = \mathcal{K}_{\mathrm{MT}}\left(y_{i}, y_{j}\right)\mathcal{K}_{\mathrm{score}}\left(s_{k}, s_{l}\right). \tag{6}
$$

$\mathcal{K}_{\mathrm{mult}}$ is a valid kernel because a product of two kernels defined on different spaces is also a kernel (Rasmussen and Williams, 2005). With $\mathcal{K}_{\mathrm{mult}}$, the covariance between two observations depends on both the difference between scoring functions and the distance between data points. This way, an observation influences the posterior for all other data points at all choices of scoring function, as long as the scoring functions are correlated. This formulation enables the use of any number of scoring functions, but in this work, we consider at most two: the main scorer $s$ and a proxy scorer $s'$.

We set $\mathcal{K}_{\mathrm{score}}(s_k, s_l)$ to be the empirical covariance between $s_k$ and $s_l$ measured over a validation set, where all scores are normalized per instance so that, in each instance, the scores of all candidates for a particular scorer have 0 mean and 1 variance. Then, for each scoring function, we concatenate all candidate scores across instances and compute the covariance between the resulting lists. Covariance is a valid kernel because the covariance calculation can be expressed as a dot product, and dot products are valid kernels.

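A sketch of both pieces, under the stated per-instance normalization (the array layouts and names are our assumptions):

```python
def estimate_k_score(main_scores, proxy_scores):
    # Empirical covariance of per-instance-normalized scores on a
    # validation set; each argument is a list of per-instance arrays.
    def normed(per_instance):
        return np.concatenate([(a - a.mean()) / (a.std() + 1e-12)
                               for a in per_instance])
    # 2x2 matrix, indexed by scorer id (0 = main s, 1 = proxy s').
    return np.cov(np.stack([normed(main_scores), normed(proxy_scores)]))

def k_mult(emb_i, emb_j, fid_i, fid_j, K_score, w=1.0):
    # Equation (6): K_mult(<y_i, s_k>, <y_j, s_l>) = K_MT(y_i, y_j) * K_score(s_k, s_l)
    k_mt = np.exp(-np.sum((emb_i - emb_j) ** 2) / (2 * w**2))
    return k_mt * K_score[fid_i, fid_j]
```
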
Proxy scores are incorporated into the posterior estimation given by Equations 2 and 3 by redefining $a$ to be a tuple of (data point, scoring function) and $\mathbf{a}$ to be a vector of such tuples. The kernel $\mathcal{K}$ is set to $\mathcal{K}_{\mathrm{mult}}$, which takes as input two such tuples. The full BayesOpt+GP+P algorithm is shown in Algorithm 1.

# 3.3 Proxy scores

We train smaller scoring models to have high covariance with $s$ for use in BayesOpt+GP+P. In this work, our scoring functions are based on the Comet referenceless quality estimation architecture (Rei et al., 2020), also known as CometKiwi. These models encode the source and hypothesis jointly with a bidirectional transformer. Activations from all transformer layers are pooled to form a fixed-size representation, which is passed to a feed-forward regression head. The vast majority of computation in these models is spent in the encoder; thus, faster Comet models can be obtained by reducing the size of the encoder.

We train Comet models using two differently sized pretrained multilingual encoder models in two ways: (1) training on the same training set as CometKiwi and (2) distillation. Among distillation methods, we try in preliminary experiments (a) training on the same training set as CometKiwi with ground-truth scores replaced by CometKiwi scores and (b) training on a synthetic dataset comprising LM samples along with their associated CometKiwi scores. The latter achieves higher correlation with CometKiwi on sampled candidates, which is to be expected since the training distribution is more suitable for the reranking use case. We therefore use this latter distillation method for all subsequent experiments. A similar procedure has been described in Rei et al. (2022a).

# 3.4 Candidate list generation

In preliminary experiments, we consider generating the candidate list using beam search with 128 outputs versus sampling 200 candidates using $\epsilon$-sampling (Hewitt et al., 2022) with $\epsilon = 0.02$, a setting which effectively balances quality and diversity for MBR (Freitag et al., 2023a). Under beam search, the candidates exhibit high lexical overlap, and while the mean score of candidates is higher, the average maximum score is lower. The effectiveness of truncated sampling over beam search in larger conditional language models has also been observed by Fernandes et al. (2022).

Furthermore, beam search suffers from out-of-memory errors on long translations, whereas with sampling we simply reduce the batch size when out of memory. While it is possible to implement beam search in a batched manner, this does not exist in any popular conditional language model libraries, to the best of our knowledge.

For these reasons, we generate 200 candidates per instance with $\epsilon$-sampling, $\epsilon = 0.02$, in all experiments. The sampled candidate list is then deduplicated, resulting in $\sim 178$ candidates on average per instance.

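For concreteness, candidate generation along these lines can be sketched with Hugging Face transformers; the checkpoint name and language-code handling follow the library's documented NLLB usage, and the exact arguments should be treated as assumptions rather than the authors' script:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/nllb-200-distilled-600M"
tok = AutoTokenizer.from_pretrained(name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tok("An example source sentence.", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,              # ancestral sampling instead of beam search
    epsilon_cutoff=0.02,         # epsilon-sampling truncation (Hewitt et al., 2022)
    num_return_sequences=200,    # candidate list size before deduplication
    forced_bos_token_id=tok.convert_tokens_to_ids("deu_Latn"),  # target language
)
# Deduplicate while preserving first-appearance order.
candidates = list(dict.fromkeys(tok.batch_decode(out, skip_special_tokens=True)))
```
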
# 4 Experiments

We now discuss the details and findings of our Bayesian optimization experiments, followed by analysis of our trained proxy scoring models, concluding with runtime measurements. All runtime values are measured on an A100-SXM4-40GB GPU. For exact values for figures in this section, see Table 3 in the Appendix. Appendix A contains extensive statistical significance tests.




Figure 2: Left: performance of reranking methods measured as the average CometKiwi score of the selected candidate. Beam search with beam size 5 achieves a score of 0.754 and is too low to be pictured here. Right: the percentage of instances where the selected candidate had the highest score. The x-axis is the scoring budget. Legends show the normalized area under the curve of the CometKiwi score of each method in brackets.

For BayesOpt experiments, we grid search for the optimal value of the RBF bandwidth parameter $w$ on the entire validation set, setting scoring budget $n = 100$ and batch size $k = 1$. While it is possible to optimize it for every unique combination of language pair, $n$, $k$, proxy scoring function, and $\beta$, we find that the results are not statistically significantly different within a range of settings. For simplicity, and to demonstrate the robustness of our methods, we use the same $w$ for all experiments.

In all experiments, we use $\alpha = 10$ initial randomly scored candidates. We set $k = 1$ in Sections 4.2 and 4.3 to demonstrate the effectiveness of BayesOpt+GP under ideal conditions, but since $k$ can have a large impact on speed, we experiment with varying it in Section 4.5.

# 4.1 Models and datasets

For candidate generation, we use the 600M-parameter distilled NLLB model (NLLB Team et al., 2022) in all experiments. For the main scoring model, we use CometKiwi-22 (Rei et al., 2022b).

As a dataset for proxy model training, we use data from the WMT Metrics Shared Task up to 2022 (Freitag et al., 2023b), which contains tuples of <source, hypothesis, human score>. The human scores were largely collected with the DA+SQM annotation protocol (Kocmi et al., 2022).

For BayesOpt experiments, we select the first 1000 and 500 source sentences per language pair from the WMT23 Metrics Shared Task dataset as the validation and test set, respectively, for 7 language pairs: English-Czech, English-German,
|
| 185 |
+
|
| 186 |
+
English-Japanese, English-Chinese, and the reverse directions of the latter 3 pairs.
|
| 187 |
+
|
| 188 |
+
CometKiwi is based on the encoder of XLM-Robertalarge (Conneau et al., 2019) (2.2GB memory). For proxy scorers we train smaller models based on XLM-Robertabase (1.1GB), and Multilingual-MiniLM-L12-H384 (Wang et al., 2020) (469MB).
|
| 189 |
+
|
| 190 |
+
# 4.2 BayesOpt+GP
The goal of BayesOpt+GP is to improve reranking speed by evaluating only a subset of the available candidates. We evaluate this through quality-cost tradeoff curves, where quality is the CometKiwi score of the final selected candidate and cost is the number of calls to the scoring function. As another measure of approximation quality, we also report the percentage of instances in which the actual best-scoring candidate is returned. We devise several baselines with which to compare BayesOpt+GP. Each is a strategy for selecting a subset of candidates to score, from which the best-scoring candidate is returned. The baselines are:

- UniqRandom: Shuffle the candidate list before de-duplication, then de-duplicate while preserving the order of the first appearance of each candidate. Select the first $\min(n, |\mathcal{C}|)$ candidates in the resulting list.
- Logprob{Avg, Sum}: Sort $\mathcal{C}$ in order of negative sequence log probability (either average or sum), then select the first $\min(n, |\mathcal{C}|)$.
- HillClimbing: Let $y^{+}$ be the highest-scoring observation point at any time step. Iteratively select $\arg\min_{y \in \bar{\mathcal{C}}_{\mathrm{obs}}} \|\mathrm{emb}(y) - \mathrm{emb}(y^{+})\|$ as the next observation point until $\min(n, |\mathcal{C}|)$ candidates are scored.

Figure 3: Average CometKiwi score of the selected top candidate (y-axis) for BayesOpt+GP+P with Distilled-S (left) and Distilled-M (right) compared to the ProxyFirst baseline. This figure disregards the additional compute cost of these proxy metrics in order to show the marginal score increase from proxy observations.

Figure 4: Average CometKiwi score of the selected candidate (y-axis) for BayesOpt+GP+P with different choices of proxy score.

UniqRandom simulates the effect of iteratively sampling candidates until $n$ unique candidates are obtained. Logprob{Avg, Sum} are included to verify whether more advanced methods indeed outperform simple subset selection using statistics obtained for free. HillClimbing is a heuristic iterative selection strategy which, like BayesOpt, is black-box and derivative-free (Conn et al., 2009).

In Figure 2, BayesOpt+GP outperforms all baselines; HillClimbing is the best among the baselines, with LogprobAvg following behind. LogprobSum severely underperforms UniqRandom in score, confirming findings on the inadequacy of very high-probability translations (Eikema and Aziz, 2020). Informally speaking, UniqRandom is a simple "exploration" strategy that ignores existing observations, while HillClimbing is a simple "exploitation" strategy, searching only over the neighbors nearest the best observation while ignoring the full search space. These results confirm that balancing these respective desiderata helps to find the optimal candidate more efficiently.
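For concreteness, the three baselines can be sketched as follows. The names `score_fn` and `emb` are placeholders for the scoring model and the embedding function, and initializing HillClimbing with a handful of random observations is our assumption for illustration.

```python
import numpy as np

def uniq_random(cands, n, rng):
    # Shuffle, then dedupe preserving first appearance; take the first min(n, |C|).
    shuffled = [cands[i] for i in rng.permutation(len(cands))]
    return list(dict.fromkeys(shuffled))[:n]

def logprob_first(cands, logprobs, n):
    # logprobs: average or summed sequence log probability per candidate.
    ranked = sorted(range(len(cands)), key=lambda i: -logprobs[i])
    return [cands[i] for i in ranked[:n]]

def hill_climbing(cands, emb, score_fn, n, rng, n_init=10):
    # Start from a few random observations (initialization is an assumption),
    # then repeatedly score the unobserved nearest neighbor of the current best.
    scores = {int(i): score_fn(cands[i])
              for i in rng.choice(len(cands), n_init, replace=False)}
    while len(scores) < min(n, len(cands)):
        best = max(scores, key=scores.get)
        unobserved = [i for i in range(len(cands)) if i not in scores]
        nxt = min(unobserved, key=lambda i: np.linalg.norm(emb[i] - emb[best]))
        scores[nxt] = score_fn(cands[nxt])
    return cands[max(scores, key=scores.get)]
```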
# 4.3 BayesOpt+GP+P
# 4.3.1 Proxy score evaluation
We first evaluate the trained proxy scorers independently of their use in BayesOpt, according to (1) actual runtime, (2) correlation with human ratings in the WMT23 dataset, (3) correlation with CometKiwi on source-hypothesis pairs in WMT23, and (4) correlation with CometKiwi on the synthetic candidates for an instance, averaged over instances. For correlations we use Kendall's $\tau_{c}$, which is commonly used in MT metric evaluation (Freitag et al., 2023b).

Table 1 shows the results for the proxy models. Model size corresponds closely to inference time. As desired, training proxies via distillation results in much higher correlation with CometKiwi, although it loses some correlation with human judgments. In subsequent experiments, we consider Distilled-{S, M} only. While LogprobAvg has comparatively much lower correlation, we nevertheless consider it as a proxy score since it is obtained for free during candidate generation.

<table><tr><td>Model</td><td>Time</td><td>Human Test</td><td>CometKiwi Test</td><td>Cands.</td></tr><tr><td>CometKiwi</td><td>51.38s</td><td>0.245</td><td>1.000</td><td>1.000</td></tr><tr><td>LogprobsAvg</td><td>0.00s</td><td>-</td><td>-</td><td>0.191</td></tr><tr><td>LogprobsSum</td><td>0.00s</td><td>-</td><td>-</td><td>-0.090</td></tr><tr><td>Authentic-S</td><td>7.13s</td><td>0.193</td><td>0.314</td><td>0.350</td></tr><tr><td>Authentic-M</td><td>18.71s</td><td>0.199</td><td>0.320</td><td>0.448</td></tr><tr><td>Distilled-S</td><td>7.13s</td><td>0.169</td><td>0.488</td><td>0.620</td></tr><tr><td>Distilled-M</td><td>18.71s</td><td>0.188</td><td>0.572</td><td>0.680</td></tr></table>

Table 1: Benchmarking proxy models (Section 3.3) on speed and correlation with human judgments/CometKiwi using the WMT23 dataset. Speed is measured as runtime per 10000 samples using the maximum batch size. Correlation is measured with Kendall's $\tau_{c}$ against human judgments and CometKiwi scores. CometKiwi correlation is computed over the provided targets in WMT23 (Test) and a synthetic dataset comprising 200 samples per source sentence, deduplicated (Cands.). Logprobs{Avg, Sum} are not evaluated on WMT23 targets because those targets were generated by other MT systems.
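As a minimal sketch, the per-instance correlation can be computed with SciPy's $\tau_{c}$ variant; the score values below are hypothetical.

```python
from scipy.stats import kendalltau

proxy_scores = [0.41, 0.63, 0.58, 0.72]      # hypothetical proxy scores
cometkiwi_scores = [0.79, 0.83, 0.80, 0.86]  # hypothetical CometKiwi scores

# Kendall's tau-c for one instance's candidate list; in the evaluation above,
# this per-instance value is averaged over instances.
tau_c, _ = kendalltau(proxy_scores, cometkiwi_scores, variant="c")
```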
# 4.3.2 Reranking results
When $s'$ is sufficiently fast and well correlated with $s$, it can further improve the quality-cost tradeoff of BayesOpt+GP. Recall that BayesOpt+GP+P initializes with $\beta$ evaluations of $s'$. Figure 4 shows the quality-cost curve when all proxy scores are known, i.e., $\beta = 200$. The relative performance when including proxy scores corresponds to their correlation with CometKiwi as shown in Table 1: Distilled-M outperforms Distilled-S, and both outperform LogprobAvg. This demonstrates the importance of ensuring high correlation in the proxy score. The addition of LogprobAvg to BayesOpt+GP has little effect, showing that poorly correlated proxies are too noisy to help and may even hinder performance. Beyond $n = 70$, all methods achieve close to the maximum attainable score.

We also examine the effect of initializing with a fraction of proxy observations rather than all of them. For some choice of $\beta$, an appropriate baseline is to rank the top-$n$ candidates among the $\beta$ observed proxy scores. We call this ProxyFirst. The results when using Distilled-M and Distilled-S as proxies are shown in Figure 3. In both cases, the difference between BayesOpt+GP+P and ProxyFirst is smaller when $\beta = 200$ than when $\beta = 50$, and this gap is smaller for Distilled-M. This is to be expected: as the covariance of $s$ and $s'$ increases, ProxyFirst with $\beta = 200$ approaches standard full-list reranking. The marginal benefit of BayesOpt+GP+P is clearer when $\beta = 50$, where proxy scores help to find promising candidates earlier.

Overall, proxy observations can indeed improve quality for a particular $n$. However, for sufficiently large $n$, BayesOpt+GP converges, so proxy observations are unnecessary. Proxy evaluations add to the runtime cost, which we discuss in Section 4.4. Therefore, while we show that the multi-fidelity kernel is capable of leveraging proxy scores to improve search, in practice the overall computational budget should be considered along with the quality and cost of the proxy scoring function to ensure that using the method is worthwhile.
# 4.4 Runtime
Our reranking algorithm significantly reduces actual runtime compared to scoring all candidates for a source sentence. We profile the full pipeline, from generating candidates to making a final selection, in three settings: (1) BayesOpt+GP with $n = 90$; (2) multi-fidelity BayesOpt+GP with 50 Distilled-S scores and $n = 70$; and (3) the baseline of evaluating CometKiwi on all candidates. $n$ and $\beta$ are selected to balance the final scores of the two algorithms (0.8213 and 0.8211 respectively, as shown in Table 3).

For the runtime calculations, we select 50 source sentences from each language pair and generate 200 candidates for each. For the baseline, we compute scores for all candidates with a batch size of 200. For the BayesOpt+GP methods, we profile the additional steps required: computing the kernel, computing the posteriors at each step, and evaluating proxy scores. Both BayesOpt+GP and BayesOpt+GP+P use batch size $k = 10$, which does not affect scores compared to using $k = 1$ (see Section 4.5). Memory bandwidth can be a major overhead in large neural networks, making it inefficient to run small batches. Since BayesOpt+GP obtains $k$ candidates per step, in order to use large batches, we process candidates for multiple instances in parallel.

Results are shown in Table 2. In all cases, candidate generation and CometKiwi calculations dominate the overall runtime. The extra cost of the BayesOpt-related computations is compensated by the savings from reduced CometKiwi evaluations, despite similarity matrix computation being $\mathcal{O}(|\mathcal{C}|^2)$ and the matrix inversion for the posterior calculation at each iteration being $\mathcal{O}(|\mathcal{C}|^3)$.

BayesOpt+GP+P with Distilled-S reduces the runtime further by reducing the number of CometKiwi calculations to 70, with the cost of loading and running the Distilled-S proxy metric introducing minimal overhead.

<table><tr><td>Operation</td><td>AllComet</td><td>BayesOpt +GP n = 90</td><td>BayesOpt +GP+P n = 70,β = 50</td></tr><tr><td>Candidates</td><td>701.38</td><td>701.38</td><td>701.38</td></tr><tr><td>Similarities</td><td>-</td><td>1.24</td><td>1.24</td></tr><tr><td>BayesOpt+GP</td><td>-</td><td>1.92</td><td>2.25</td></tr><tr><td>Comet Loading</td><td>8.43</td><td>8.43</td><td>11.27</td></tr><tr><td>Distilled-S</td><td>-</td><td>-</td><td>11.11</td></tr><tr><td>CometKiwi</td><td>274.87</td><td>188.39</td><td>146.33</td></tr><tr><td>Total</td><td>984.68</td><td>901.36</td><td>873.58</td></tr></table>

Table 2: Runtimes for the full reranking baseline (AllComet), BayesOpt+GP, and BayesOpt+GP+P with Distilled-S as the proxy score, at settings where the final CometKiwi scores are roughly equal. Times are given in seconds per 350 instances.
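To relate these costs to the underlying linear algebra, a single BayesOpt step can be sketched as a GP posterior update followed by an acquisition maximization. This is a generic sketch with a UCB-style acquisition; the exact acquisition function, observation noise, and mean handling used in the actual implementation may differ.

```python
import numpy as np

def gp_step(K, obs_idx, obs_y, k=10, noise=1e-4, kappa=1.0):
    """Select the next k candidates given a precomputed kernel matrix K.

    The Cholesky factorization below is the per-iteration O(|C_obs|^3)
    operation discussed above; K itself costs O(|C|^2) once per instance.
    obs_idx: indices of scored candidates; obs_y: their scores (np.ndarray).
    """
    un_idx = np.setdiff1d(np.arange(K.shape[0]), obs_idx)
    K_oo = K[np.ix_(obs_idx, obs_idx)] + noise * np.eye(len(obs_idx))
    K_uo = K[np.ix_(un_idx, obs_idx)]
    L = np.linalg.cholesky(K_oo)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, obs_y - obs_y.mean()))
    mean = obs_y.mean() + K_uo @ alpha                     # posterior mean
    V = np.linalg.solve(L, K_uo.T)
    var = np.clip(K[un_idx, un_idx] - (V ** 2).sum(axis=0), 0.0, None)
    ucb = mean + kappa * np.sqrt(var)                      # UCB acquisition
    return un_idx[np.argsort(-ucb)[:k]]
```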
# 4.5 Batch size $k$ in BayesOpt+GP
We examine the effect of the batch size $k$ in BayesOpt+GP for $k = 1, 2, 5, 10$. Figure 5 shows that, as expected, larger $k$ diminishes performance, although the differences nearly vanish at $n > 70$.

$k$ determines how often the BayesOpt loop is run and thus has a large effect on speed. Fortunately, we observe that for sufficiently large $n$, $k$ can be increased without sacrificing quality.

Figure 5: Difference between BayesOpt+GP with a batch size of 1 (top red line in Figure 2) and BayesOpt+GP with higher batch sizes. Negative values mean that the higher batch size performed worse than BayesOpt+GP with a batch size of 1.
# 5 Conclusion
In this work, we formalize MT reranking as a Bayesian optimization problem, leveraging the basic observation that similar translations are more likely to have similar quality scores. We also extend the framework to accept observations from proxy scoring functions, which is applicable when the target score is very costly: large QE models, MBR, or human evaluation. In realistic experiments, we show that our methods improve reranking efficiency over strong baselines. We also propose several design choices that make the methods useful in practice: a GP kernel that requires minimal overhead, and effective proxy model training via distillation.

We consider our work a first step in applying BayesOpt to MT reranking. Future directions include integrating BayesOpt with candidate generation, alternative acquisition functions, and further exploration of GP kernels for MT.
# 6 Limitations
The optimization problem considered in this work is to maximize the score from a scoring model. We show that BayesOpt is an effective optimizer, but we do not explore to what extent the optimization problem itself is flawed due to flaws in the scoring model. We refer to Kocmi et al. (2024b) for an understanding of what magnitude of score difference between systems is significant. However, the existence of "metric overfitting" when directly optimizing an evaluation metric is debated and may affect the interpretation of score differences (Fernandes et al., 2022; Wang et al., 2024).

BayesOpt+GP requires matrix inversion, an $\mathcal{O}(|\mathcal{C}|^3)$ operation that is performed once per iteration. While it is inexpensive for the $|\mathcal{C}|$ we consider, this limits the number of observations that can be used for posterior computation without resorting to approximations (Noack et al., 2023).

As an iterative algorithm, BayesOpt can score no more than $k$ candidates in a batch for a single instance. Small batch sizes introduce a significant bottleneck for large neural networks, so in order to maintain large batch sizes, we propose processing multiple instances in parallel. However, this requires additional engineering.
# Acknowledgements
Julius Cheng is supported by a scholarship from Huawei. Part of this work received support from the European Union's Horizon research and innovation programme under grant agreement No 101135798, project Meetween (My Personal AI Mediator for Virtual MEETtings BetWEEN People). We thank the organizers of MT Marathon 2024, where the authors met and this work was conceived. We also thank Beni Egressy for useful discussions and Will Tebbutt for lending expertise on GPs.
# References
Daniel Beck and Trevor Cohn. 2017. Learning kernels over strings using Gaussian processes. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), 67-73. Asian Federation of Natural Language Processing.

Daniel Beck, Trevor Cohn, and Lucia Specia. 2014. Joint emotion analysis via multi-task Gaussian processes. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1798-1803. Association for Computational Linguistics.

Daniel Beck, Kashif Shah, Trevor Cohn, and Lucia Specia. 2013. SHEF-Lite: When less is more for translation quality estimation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, 337-342. Association for Computational Linguistics.

James Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. 2011. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, volume 24. Curran Associates, Inc.

Sumanta Bhattacharyya, Amirmohammad Rooshenas, Subhajit Naskar, Simeng Sun, Mohit Iyyer, and Andrew McCallum. 2021. Energy-based reranking: Improving neural machine translation using energy-based models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 4528-4537. Association for Computational Linguistics.

Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL '05), 173-180. Association for Computational Linguistics.

Lingjiao Chen, Matei Zaharia, and James Zou. 2023. FrugalGPT: How to use large language models while reducing cost and improving performance.

Julius Cheng and Andreas Vlachos. 2023. Faster minimum Bayes risk decoding with confidence-based pruning. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 12473-12480. Association for Computational Linguistics.

Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25-70.

Andrew R. Conn, Katya Scheinberg, and Luis N. Vicente. 2009. Introduction to Derivative-Free Optimization. Society for Industrial and Applied Mathematics.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. CoRR, abs/1911.02116.

Hiroyuki Deguchi, Yusuke Sakai, Hidetaka Kamigaito, Taro Watanabe, Hideki Tanaka, and Masao Utiyama. 2024. Centroid-based efficient minimum Bayes risk decoding. In Findings of the Association for Computational Linguistics: ACL 2024, 11009-11018, Bangkok, Thailand and virtual meeting. Association for Computational Linguistics.

Bryan Eikema and Wilker Aziz. 2020. Is MAP decoding all you need? The inadequacy of the mode in neural machine translation. In Proceedings of the 28th International Conference on Computational Linguistics, 4506-4520. International Committee on Computational Linguistics.

Bryan Eikema and Wilker Aziz. 2022. Sampling-based approximations to minimum Bayes risk decoding for neural machine translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 10978-10993. Association for Computational Linguistics.

Patrick Fernandes, Antonio Farinhas, Ricardo Rei, José G. C. de Souza, Perez Ogayo, Graham Neubig, and Andre Martins. 2022. Quality-aware decoding for neural machine translation. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 1396-1412. Association for Computational Linguistics.

Mara Finkelstein, Subhajit Naskar, Mehdi Mirzazadeh, Apurva Shah, and Markus Freitag. 2024. MBR and QE finetuning: Training-time distillation of the best and most expensive decoding methods.

Markus Freitag, Behrooz Ghorbani, and Patrick Fernandes. 2023a. Epsilon sampling rocks: Investigating sampling strategies for minimum Bayes risk decoding for machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2023, 9198-9209. Association for Computational Linguistics.

Markus Freitag, David Grangier, Qijun Tan, and Bowen Liang. 2022. High quality rather than high model probability: Minimum Bayes risk decoding with neural metrics. Transactions of the Association for Computational Linguistics, 10:811-825.

Markus Freitag, Nitika Mathur, Chi-kiu Lo, Eleftherios Avramidis, Ricardo Rei, Brian Thompson, Tom Kocmi, Frederic Blain, Daniel Deutsch, Craig Stewart, Chrysoula Zerva, Sheila Castilho, Alon Lavie, and George Foster. 2023b. Results of WMT23 metrics shared task: Metrics might be guilty but references are not innocent. In Proceedings of the Eighth Conference on Machine Translation, 578-628. Association for Computational Linguistics.

Philipp Hennig and Christian J. Schuler. 2011. Entropy search for information-efficient global optimization. ArXiv, abs/1112.1217.

John Hewitt, Christopher Manning, and Percy Liang. 2022. Truncation sampling as language model desmoothing. In Findings of the Association for Computational Linguistics: EMNLP 2022, 3414-3427. Association for Computational Linguistics.

Donald R. Jones. 2001. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21:345-383.

Juraj Juraska, Mara Finkelstein, Daniel Deutsch, Aditya Siddhant, Mehdi Mirzazadeh, and Markus Freitag. 2023. MetricX-23: The Google submission to the WMT 2023 metrics shared task. In Proceedings of the Eighth Conference on Machine Translation, 756-767. Association for Computational Linguistics.

Kirthevasan Kandasamy, Gautam Dasarathy, Junier B. Oliva, Jeff Schneider, and Barnabas Poczos. 2016. Gaussian process bandit optimisation with multi-fidelity evaluations. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.

Tom Kocmi, Eleftherios Avramidis, Rachel Bawden, Ondrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Markus Freitag, Thamme Gowda, Roman Grundkiewicz, Barry Haddow, Marzena Karpinska, Philipp Koehn, Benjamin Marie, Christof Monz, Kenton Murray, Masaaki Nagata, Martin Popel, Maja Popovic, Mariya Shmatova, Steinthor Steingrímsson, and Vilém Zouhar. 2024a. Findings of the WMT24 general machine translation shared task: The LLM era is here but MT is not solved yet. In Proceedings of the Ninth Conference on Machine Translation, 1-46, Miami, Florida, USA. Association for Computational Linguistics.

Tom Kocmi, Rachel Bawden, Ondrej Bojar, Anton Dvorkovich, Christian Federmann, Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Rebecca Knowles, Philipp Koehn, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshiaki Nakazawa, Michal Novak, Martin Popel, and Maja Popovic. 2022. Findings of the 2022 conference on machine translation (WMT22). In Proceedings of the Seventh Conference on Machine Translation (WMT), 1-45. Association for Computational Linguistics.

Tom Kocmi, Vilém Zouhar, Christian Federmann, and Matt Post. 2024b. Navigating the metrics maze: Reconciling score magnitudes and accuracies. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1999-2014, Bangkok, Thailand. Association for Computational Linguistics.

Andreas Krause, Ajit Singh, and Carlos Guestrin. 2008. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. Journal of Machine Learning Research, 9:235-284.

Ann Lee, Michael Auli, and Marc'Aurelio Ranzato. 2021. Discriminative reranking for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 7250-7264. Association for Computational Linguistics.

Jonas Mockus. 1974. On Bayesian methods for seeking the extremum. In Proceedings of the IFIP Technical Conference, 400-404.

Marcus M. Noack, Harinarayan Krishnan, Mark D. Risser, and Kristofer G. Reyes. 2023. Exact Gaussian processes for massive datasets via non-stationary sparsity-discovering kernels. Scientific Reports, 13(1).

Slav Petrov. 2011. Coarse-to-Fine Natural Language Processing (Theory and Applications of Natural Language Processing). Springer Publishing Company, Incorporated.

C.E. Rasmussen and C.K.I. Williams. 2005. Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning series. MIT Press.

Mathieu Ravaut, Shafiq Joty, and Nancy Chen. 2022. SummaReranker: A multi-task mixture-of-experts re-ranking framework for abstractive summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 4504-4524. Association for Computational Linguistics.

Ricardo Rei, Ana C Farinha, José G.C. de Souza, Pedro G. Ramos, André F.T. Martins, Luisa Coheur, and Alon Lavie. 2022a. Searching for COMETINHO: The little metric that could. In Proceedings of the 23rd Annual Conference of the European Association for Machine Translation, 61-70. European Association for Machine Translation.

Ricardo Rei, Jose Pombal, Nuno M. Guerreiro, João Alves, Pedro Henrique Martins, Patrick Fernandes, Helena Wu, Tania Vaz, Duarte Alves, Amin Farajian, Sweta Agrawal, Antonio Farinhas, José G. C. De Souza, and André Martins. 2024. Tower v2: Unbabel-IST 2024 submission for the general MT shared task. In Proceedings of the Ninth Conference on Machine Translation, 185-204, Miami, Florida, USA. Association for Computational Linguistics.

Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2685-2702. Association for Computational Linguistics.

Ricardo Rei, Marcos Treviso, Nuno M. Guerreiro, Chrysoula Zerva, Ana C Farinha, Christine Maroti, José G. C. de Souza, Taisiya Glushkova, Duarte Alves, Luisa Coheur, Alon Lavie, and André F. T. Martins. 2022b. CometKiwi: IST-Unbabel 2022 submission for the quality estimation shared task. In Proceedings of the Seventh Conference on Machine Translation (WMT), 634-645. Association for Computational Linguistics.

Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 7881-7892. Association for Computational Linguistics.

Bobak Shahriari, Kevin Swersky, Ziyun Wang, Ryan P. Adams, and Nando de Freitas. 2016. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104:148-175.

Prasann Singhal, Jiacheng Xu, Xi Ye, and Greg Durrett. 2023. EEL: Efficiently encoding lattices for reranking. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 9299-9316. Association for Computational Linguistics.

Niranjan Srinivas, Andreas Krause, Sham M. Kakade, and Matthias W. Seeger. 2009. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory, 58:3250-3265.

NLLB Team, Marta R. Costa-jussà, James Cross, Onur Celebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang. 2022. No language left behind: Scaling human-centered machine translation.

Firas Trabelsi, David Vilar, Mara Finkelstein, and Markus Freitag. 2024. Efficient minimum Bayes risk decoding using low-rank matrix completion algorithms.

Jannis Vamvas and Rico Sennrich. 2024. Linear-time minimum Bayes risk decoding with reference aggregation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 790-801, Bangkok, Thailand. Association for Computational Linguistics.

Giorgos Vernikos and Andrei Popescu-Belis. 2024. Don't rank, combine! Combining machine translation hypotheses using quality estimation. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 12087-12105, Bangkok, Thailand. Association for Computational Linguistics.

Jun Wang, Eleftheria Briakou, Hamid Dadkhahi, Rishabh Agarwal, Colin Cherry, and Trevor Cohn. 2024. Don't throw away data: Better sequence knowledge distillation.

Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. MiniLM: Deep self-attention distillation for task-agnostic compression of pre-trained transformers.

Jian Wu, Saul Toscano-Palmerin, Peter I. Frazier, and Andrew Gordon Wilson. 2020. Practical multi-fidelity Bayesian optimization for hyperparameter tuning. In Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, volume 115 of Proceedings of Machine Learning Research, 788-798. PMLR.

Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation.

Kyra Yee, Yann Dauphin, and Michael Auli. 2019. Simple and effective noisy channel modeling for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 5696-5701. Association for Computational Linguistics.

Chrysoula Zerva, Frédéric Blain, Ricardo Rei, Piyawat Lertvittayakumjorn, José G. C. de Souza, Steffen Eger, Diptesh Kanojia, Duarte Alves, Constantin Orasan, Marina Fomicheva, André F. T. Martins, and Lucia Specia. 2022. Findings of the WMT 2022 shared task on quality estimation. In Proceedings of the Seventh Conference on Machine Translation (WMT), 69-99. Association for Computational Linguistics.
<table><tr><td rowspan="2">Method</td><td rowspan="2">Figure</td><td rowspan="2">10</td><td rowspan="2">20</td><td rowspan="2">30</td><td rowspan="2">40</td><td colspan="6">CometKiwi runs</td></tr><tr><td>50</td><td>60</td><td>70</td><td>80</td><td>90</td><td>100</td></tr><tr><td>UniqRandom</td><td>2</td><td>0.7917</td><td>0.8022</td><td>0.8074</td><td>0.8104</td><td>0.8124</td><td>0.8140</td><td>0.8149</td><td>0.8160</td><td>0.8168</td><td>0.8175</td></tr><tr><td>LogprobAvg</td><td>2</td><td>0.7956</td><td>0.8055</td><td>0.8101</td><td>0.8129</td><td>0.8149</td><td>0.8162</td><td>0.8171</td><td>0.8181</td><td>0.8187</td><td>0.8193</td></tr><tr><td>LogprobSum</td><td>2</td><td>0.7519</td><td>0.7723</td><td>0.7834</td><td>0.7913</td><td>0.7974</td><td>0.8019</td><td>0.8051</td><td>0.8081</td><td>0.8109</td><td>0.8125</td></tr><tr><td>HillClimbing</td><td>2</td><td>0.7917</td><td>0.8080</td><td>0.8124</td><td>0.8148</td><td>0.8165</td><td>0.8176</td><td>0.8184</td><td>0.8191</td><td>0.8196</td><td>0.8200</td></tr><tr><td>ProxyFirst 200 Distilled-S</td><td>3</td><td>0.8081</td><td>0.8141</td><td>0.8167</td><td>0.8181</td><td>0.8190</td><td>0.8197</td><td>0.8202</td><td>0.8206</td><td>0.8208</td><td>0.8210</td></tr><tr><td>ProxyFirst 200 Distilled-M</td><td>3</td><td>0.8119</td><td>0.8165</td><td>0.8184</td><td>0.8194</td><td>0.8201</td><td>0.8206</td><td>0.8209</td><td>0.8211</td><td>0.8212</td><td>0.8213</td></tr><tr><td>ProxyFirst 50 Distilled-S</td><td>3</td><td>0.8054</td><td>0.8100</td><td>0.8114</td><td>0.8121</td><td>0.8124</td><td>-</td><td>-</td><td>-</td><td>-</td><td></td></tr><tr><td>ProxyFirst 50 Distilled-M</td><td>3</td><td>0.8073</td><td>0.8107</td><td>0.8119</td><td>0.8122</td><td>0.8124</td><td>-</td><td>-</td><td>-</td><td>-</td><td></td></tr><tr><td>BayesOpt+GP</td><td>2,4,3</td><td>0.7917</td><td>0.8121</td><td>0.8167</td><td>0.8190</td><td>0.8201</td><td>0.8206</td><td>0.8210</td><td>0.8212</td><td>0.8213</td><td>0.8214</td></tr><tr><td>BayesOpt+GP+P with LogprobAvg</td><td>4</td><td>0.7956</td><td>0.8123</td><td>0.8166</td><td>0.8187</td><td>0.8198</td><td>0.8205</td><td>0.8208</td><td>0.8210</td><td>0.8213</td><td>0.8214</td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-S</td><td>4,3</td><td>0.8081</td><td>0.8165</td><td>0.8190</td><td>0.8200</td><td>0.8207</td><td>0.8210</td><td>0.8212</td><td>0.8213</td><td>0.8214</td><td>0.8215</td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-M</td><td>4,3</td><td>0.8119</td><td>0.8182</td><td>0.8199</td><td>0.8205</td><td>0.8209</td><td>0.8211</td><td>0.8213</td><td>0.8214</td><td>0.8215</td><td>0.8215</td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-S</td><td>4,3</td><td>0.8054</td><td>0.8153</td><td>0.8184</td><td>0.8196</td><td>0.8204</td><td>0.8208</td><td>0.8210</td><td>0.8213</td><td>0.8214</td><td>0.8214</td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-M</td><td>4,3</td><td>0.8073</td><td>0.8164</td><td>0.8187</td><td>0.8200</td><td>0.8207</td><td>0.8209</td><td>0.8211</td><td>0.8213</td><td>0.8214</td><td>0.8215</td></tr></table>
<table><tr><td rowspan="2">Method</td><td rowspan="2">Figure</td><td rowspan="2">110</td><td rowspan="2">120</td><td rowspan="2">130</td><td rowspan="2">140</td><td colspan="6">CometKiwi runs</td></tr><tr><td>150</td><td>160</td><td>170</td><td>180</td><td>190</td><td>200</td></tr><tr><td>UniqRandom</td><td>2</td><td>0.8182</td><td>0.8188</td><td>0.8192</td><td>0.8197</td><td>0.8200</td><td>0.8205</td><td>0.8208</td><td>0.8211</td><td>0.8214</td><td>0.8216</td></tr><tr><td>LogprobAvg</td><td>2</td><td>0.8199</td><td>0.8203</td><td>0.8205</td><td>0.8209</td><td>0.8211</td><td>0.8212</td><td>0.8213</td><td>0.8214</td><td>0.8216</td><td>0.8216</td></tr><tr><td>LogprobSum</td><td>2</td><td>0.8139</td><td>0.8156</td><td>0.8170</td><td>0.8180</td><td>0.8188</td><td>0.8196</td><td>0.8204</td><td>0.8209</td><td>0.8212</td><td>0.8216</td></tr><tr><td>HillClimbing</td><td>2</td><td>0.8203</td><td>0.8206</td><td>0.8208</td><td>0.8209</td><td>0.8211</td><td>0.8213</td><td>0.8214</td><td>0.8215</td><td>0.8216</td><td>0.8216</td></tr><tr><td>BayesOpt+GP</td><td>2,4,3</td><td>0.8215</td><td>0.8215</td><td>0.8215</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td></tr><tr><td>BayesOpt+GP+P with LogprobAvg</td><td>4</td><td>0.8214</td><td>0.8215</td><td>0.8215</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td><td>0.8216</td></tr></table>
Table 3: Exact values (selected candidate score) for Figures 2 to 4.
# A Statistical Significance
We measure statistical significance between two methods based on the final candidate CometKiwi scores at budgets 30, 60, and 90, and across the budget range from 10 to 190, in Table 4. To determine whether one method is better than another, we use a one-sided paired Student's t-test run across the individual samples, with a p-value threshold of 0.01.
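A minimal sketch of this test with SciPy, using hypothetical per-instance scores:

```python
from scipy.stats import ttest_rel

# Hypothetical per-instance CometKiwi scores of two methods at a fixed budget.
scores_a = [0.812, 0.834, 0.790, 0.851, 0.823]
scores_b = [0.805, 0.829, 0.793, 0.840, 0.818]

# One-sided paired t-test: is method A significantly better than method B?
t_stat, p_value = ttest_rel(scores_a, scores_b, alternative="greater")
significant = p_value < 0.01
```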
<table><tr><td>Budget 30</td><td>UniqRandom</td><td>LogprobAvg</td><td>LogprobSum</td><td>HillClimbing</td><td>BayesOpt+GP</td><td>Budget 60</td><td>UniqRandom</td><td>LogprobAvg</td><td>LogprobSum</td><td>HillClimbing</td><td>BayesOpt+GP</td></tr><tr><td>LogprobAvg</td><td>←</td><td>←</td><td>↑</td><td>↑</td><td></td><td>LogprobAvg</td><td>←</td><td>←</td><td>↑</td><td>↑</td><td></td></tr><tr><td>LogprobSum</td><td>↑</td><td>↑</td><td></td><td>↑</td><td>↑</td><td>LogprobSum</td><td>↑</td><td>↑</td><td></td><td>↑</td><td>↑</td></tr><tr><td>HillClimbing</td><td>←</td><td>←</td><td>←</td><td></td><td>↑</td><td>HillClimbing</td><td>←</td><td>←</td><td>←</td><td></td><td>↑</td></tr><tr><td>ProxyFirst 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>ProxyFirst 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>↑</td><td></td></tr><tr><td>ProxyFirst 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>ProxyFirst 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>ProxyFirst 50 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>↑</td><td>↑</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>ProxyFirst 50 Distilled-M</td><td>←</td><td>←</td><td>←</td><td></td><td>↑</td><td></td><td></td><td></td><td></td><td></td><td></td></tr><tr><td>BayesOpt+GP</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP</td><td>←</td><td>←</td><td>←</td><td></td><td></td></tr><tr><td>BayesOpt+GP+P with LogprobAvg</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP+P with LogprobAvg</td><td>←</td><td>←</td><td>←</td><td></td><td></td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>BayesOpt+GP+P with 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>BayesOpt+GP+P with 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>BayesOpt+GP+P with 50 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>BayesOpt+GP+P with 50 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr></table>
<table><tr><td>Budget 90</td><td>UniqRandom</td><td>LogprobAvg</td><td>LogprobSum</td><td>HillClimbing</td><td>BayesOpt+GP</td><td>Across budgets 10 to 190</td><td>UniqRandom</td><td>LogprobAvg</td><td>LogprobSum</td><td>HillClimbing</td><td>BayesOpt+GP</td></tr><tr><td>LogprobAvg</td><td>←</td><td>←</td><td>↑</td><td>↑</td><td></td><td>LogprobAvg</td><td>←</td><td>←</td><td>↑</td><td>↑</td><td></td></tr><tr><td>LogprobSum</td><td>↑</td><td>↑</td><td>↑</td><td>↑</td><td></td><td>LogprobSum</td><td>↑</td><td>↑</td><td>↑</td><td>↑</td><td></td></tr><tr><td>HillClimbing</td><td>←</td><td>←</td><td>←</td><td>↑</td><td></td><td>HillClimbing</td><td>←</td><td>←</td><td>←</td><td>↑</td><td></td></tr><tr><td>ProxyFirst 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td>↑</td><td>ProxyFirst 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>ProxyFirst 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>ProxyFirst 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with LogprobAvg</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP+P with LogprobAvg</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP+P with 200 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td>←</td><td>BayesOpt+GP+P with 200 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP+P with 50 Distilled-S</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr><tr><td>BayesOpt+GP+P with 50 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td><td>BayesOpt+GP+P with 50 Distilled-M</td><td>←</td><td>←</td><td>←</td><td>←</td><td></td></tr></table>
Table 4: Statistical significance comparison between the proposed methods across various CometKiwi call budgets. Within a cell, $\uparrow$ means that the column method (in the header) is statistically significantly better than the row method, and $\leftarrow$ means the opposite. If a cell is empty, neither method is significantly better than the other. For example, in the Budget 30 (top left) table, the $\leftarrow$ in the third row, first column means that HillClimbing is significantly better than UniqRandom at a budget of 30.
|
2025/A Bayesian Optimization Approach to Machine Translation Reranking/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d6f69ee2f419162fbe8f4ff0f5246936d1b17f51479b6c0d5480a5ed9f0815da
size 711131

2025/A Bayesian Optimization Approach to Machine Translation Reranking/layout.json
ADDED
The diff for this file is too large to render. See raw diff

2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/4d3753eb-da4e-4fdc-b38d-da331bf2b58e_content_list.json
ADDED
The diff for this file is too large to render. See raw diff

2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/4d3753eb-da4e-4fdc-b38d-da331bf2b58e_model.json
ADDED
The diff for this file is too large to render. See raw diff

2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/4d3753eb-da4e-4fdc-b38d-da331bf2b58e_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:bb79319b9068c75bcac86bb28a347f849d595bc92081386c8b47d73c40d13404
size 1592206

2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/full.md
ADDED
@@ -0,0 +1,558 @@
# A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models

Xiujie Song$^{1}$, Mengyue Wu$^{1*}$, Kenny Q. Zhu$^{2*}$, Chunhao Zhang$^{1}$, Yanyi Chen$^{3}$

$^{1}$X-LANCE Lab, Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China; $^{2}$University of Texas at Arlington, Arlington, Texas, USA; $^{3}$University of Chicago, Chicago, Illinois, USA. $^{1}${xiujiesong, mengyuewu}@sjtu.edu.cn, $^{2}$kenny.zhu@uta.edu
# Abstract
Large Vision-Language Models (LVLMs), despite their recent success, are hardly comprehensively tested for their cognitive abilities. Inspired by the prevalent use of the Cookie Theft task in human cognitive tests, we propose a novel evaluation benchmark to evaluate high-level cognitive abilities of LVLMs using images with rich semantics. The benchmark consists of 251 images along with comprehensive annotations. It defines eight reasoning capabilities and comprises an image description task and a visual question answering task. Our evaluation of well-known LVLMs shows that there is still a significant gap in cognitive abilities between LVLMs and humans<sup>1</sup>.
# 1 Introduction
Recently, with the emergence of Large Language Models (LLMs) such as GPT-4 (OpenAI, 2023), the cognitive abilities of language models have reached a new level (Zhuang et al., 2023). They demonstrate remarkable performance in many tasks (Bubeck et al., 2023). In Vision-Language (VL), several researchers (Zhu et al., 2023b; Liu et al., 2023b; Ye et al., 2023) endeavor to boost Vision-Language Pre-trained Models (VLPMs) by integrating powerful LLMs (Touvron et al., 2023; Chiang et al., 2023), referred to as Large Vision-Language Models (LVLMs) (Li et al., 2023c). With an LLM serving as the "brain", the cognitive abilities of LVLMs are enhanced, enabling them to tackle more challenging tasks (Yang et al., 2023). Some state-of-the-art LVLMs, such as GPT-4o (OpenAI, 2023), are progressing toward human-level cognitive abilities. Thus, there is growing interest in evaluating the cognitive abilities of LVLMs. Though some LVLM evaluation benchmarks, such as MME (Fu et al., 2023), MMBench (Liu et al., 2023c), and SEED-Bench (Li et al., 2023a), evaluate cognitive reasoning ability as one aspect of their evaluation, they do not provide a comprehensive evaluation of higher-level reasoning abilities. Most of the images they use contain relatively little semantic content and thus require relatively little reasoning to understand.

Figure 1: The Cookie Theft picture description task. The descriptions in the green frame and the orange frame were produced by a healthy 75-year-old man and a 66-year-old woman with probable AD dementia$^2$, respectively.
In our study, we draw inspiration from the Cookie Theft picture description task (Figure 1), a key component of the Boston Diagnostic Aphasia Examination (Goodglass et al., 2001), which is widely used in clinical practice within speech-language pathology for language and cognitive function screening (Cummings, 2019; Mueller et al., 2018). Notably, despite being designed more than half a century ago, this picture remains prevalent in contemporary psychological discussions.

Is it possible to transfer the success of the Cookie Theft picture description task in human cognitive tests to evaluating the cognitive abilities of LVLMs? Linguists and psychologists (Cummings, 2019) have analyzed the factors contributing to the success of the Cookie Theft picture. The study reveals that the narrative includes information of varying levels of importance and encompasses a broad range of semantic categories. It is observed that, when describing the Cookie Theft picture, individuals with intact cognitive functions exhibit their cognitive prowess by logically deducing the events and their interconnections, the relationships between characters and their mental states, etc. In contrast, those with cognitive impairments tend to merely list the superficial aspects of the situation without deeper reasoning. For instance, in Figure 1, by comparing the descriptions produced by the healthy man and the woman with probable Alzheimer's disease (AD) dementia, we can identify the following differences:
- The healthy man's description used "mother" instead of "lady", indicating reasoning about character relationships.
- The healthy man used "stealing cookies" instead of "taking cookies", indicating reasoning about this high-level event. The patient's description did not mention this event at all.
- The healthy man used "the mother is preoccupied" and "happy" to describe people's mental states.
- The healthy man's description reflected the causal relationships between events: "the kids are stealing cookies" because "the mother is preoccupied" and "not paying attention to her kids".
Through these reasoning processes, the difference in cognitive abilities between the two individuals is reflected in their descriptions. A picture that can evaluate cognitive functions needs to be carefully designed and crafted. Tasnim et al. (2022) introduced guidelines for drawing pictures similar to Cookie Theft, which are consistent with the findings mentioned above. Generally speaking, compared to ordinary images, Cookie Theft-like images (i) feature a prominent story theme, (ii) have richer content, and (iii) display complex relationships among entities, and thus require stronger cognitive abilities to understand and describe.

With the above design principles, we propose to construct a Cognitive Evaluation Benchmark, referred to as CogBench, to evaluate the cognitive abilities of LVLMs, mainly from the reasoning perspective, using high-quality Cookie Theft-like images. CogBench defines eight core cognitive reasoning capabilities: reasoning about special time, location, character, character relationship, event, event relationship, next-moment event, and mental state. Both a generative Image Description task and a discriminative Visual Question Answering (VQA) task are designed. Our main contributions are as follows:
- To the best of our knowledge, this is the first attempt to incorporate the concept of the well-known Cookie Theft picture description task, originally designed for human cognitive testing, into the cognitive evaluation of LVLMs.
- Taking insights from human cognition research, we are the first to define Cookie Theft-like images with eight reasoning dimensions and to create a dataset of semantically complex images aligned with Cookie Theft. This approach allows for a comprehensive evaluation of the visual reasoning capabilities of LVLMs across these dimensions.
- Our evaluation of existing LVLMs shows that a significant gap exists between the cognitive abilities of LVLMs and human beings, indicating that CogBench will be a valuable evaluation benchmark in the near future.
# 2 Dataset Construction
In this section, we will introduce the construction of CogBench, detailing its image collection, annotation process, tasks, and data statistics.
# 2.1 Image Collection
Figure 2: Comparison between our images and those from previous visual reasoning tasks; panels (a)-(d) show images for visual reasoning from previous datasets, and panels (e)-(h) show images in CogBench (ours). Our images contain rich entities and CoRs. Compared to our images, image (a) has fewer entities and CoRs, while images (b) and (c) have some entities but fewer CoRs.

Building on previous studies (Cummings, 2019; Tasnim et al., 2022), we establish the following criteria for collecting the Cookie Theft-like images we propose:
Rule 1: Storytelling. The image depicts an interesting story. For instance, the Cookie Theft picture tells the story of a mother busy washing dishes while two kids take the opportunity to stand on a stool and sneakily steal cookies.

Rule 2: Rich Chain-of-Reasonings. Images should display rich Chain-of-Reasonings (CoRs) in a scene. A CoR connects low-level observations in an image to produce a high-level reasoning conclusion, or connects the cause and effect of events. For example, "The mother is busy washing dishes. + The boy is standing on the stool behind the mother. + The girl standing by the boy is shushing him. + The boy is fetching cookies from the jar in the cabinet. → The boy and girl are stealing cookies." is a CoR about the high-level event "stealing cookies". Note that a story is usually constructed through various CoRs.

Rule 3: Restricted Content Complexity. Images should contain rich content but not be overly complex. The number of entities should be sufficient to support a good story while being restricted to emphasize the key points effectively.

With the above criteria, we manually collect pictures from Pinterest<sup>3</sup>, and the Cookie Theft picture is also included. Figure 2 illustrates the differences between our images and those from other datasets through examples. Note that most of the images in CogBench are in a painting style, because such images are abstracted from reality and tend to contain richer CoRs than real-world images, making them more effective for picture description tasks. Even so, painting-style images that meet our criteria are still rare, highlighting the value of our dataset.
# 2.2 Image Annotation
Human annotators, mostly undergraduate or graduate students aged 18-28, are hired to annotate the collected images. As shown in Figure 3, the annotation includes three parts: [Entities], [CoRs] and [Description]. By annotating [Entities] and [CoRs], we aim to evaluate the low-level recognition ability and high-level cognitive reasoning ability of models respectively based on their descriptions. [Description] is annotated as the reference description for the image. The three parts are annotated in that order.
|
| 89 |
+
|
| 90 |
+
[Entity Annotation] We ask annotators to list as many entities in the image as possible and entities that are difficult to recognize should be omitted.
|
| 91 |
+
|
| 92 |
+
[CoR Annotation] To evaluate model cognition in a fine-grained manner, we design eight reasoning dimensions based on studies in human cognition (Cummings, 2019; Byom and Mutlu, 2013; Addis et al., 2007) and Computer Vision (CV) (Fu et al., 2022; Park et al., 2020; Zellers et al., 2019; Shen et al., 2020). CoRs for these dimensions are annotated:
- Special Time Reasoning: reasoning about the special time of the story in the image, e.g., festivals.
- Location Reasoning: reasoning about the location of the story in the image, e.g., near a school.
- Character Reasoning: reasoning about the characters of subjects in the image, e.g., a doctor.
- Character Relationship Reasoning: reasoning about the relationships between characters in the image, e.g., "the woman is the mother of the kids."
- Event Reasoning: reasoning about the high-level events in the current and previous moments in the image. The difference between high-level and low-level events lies in the amount of semantic information they contain. For example, "stealing cookies" is a higher-level event compared to "taking cookies" as it additionally conveys the semantic of "taking advantage without permission or knowledge."
- Event Relationship Reasoning: reasoning about the causal and temporal relationships between different events in the image. For instance, "the sink is overflowing because the mother left the tap on."
- Next Moment Event Reasoning: reasoning about the events that will happen in the next moment in the image. For example, "the police officer will reprimand the boy who violates the rules."
- Mental State Reasoning: reasoning about the mental states of subjects in the image, including their emotions, thoughts, and other psychological states. For example, "the girl appears to be happy."

[Description Summary] Annotators are finally asked to write a description that conveys the entire story in the image based on the annotated [Entities] and [CoRs].

Figure 3: An example of the Description task from CogBench.

[ENTITIES]: men, women, bus stop, sign, phone booth, newspaper, coat, buildings, snow, wind.

[SPECIAL TIME REASONING]: There is some snow on the ground. $\rightarrow$ It is winter.

[LOCATION REASONING]: There is a sign that says "NO STANDING BUS STOP". $\rightarrow$ It is near a bus stop.

[EVENT REASONING]:
It is near a bus stop. + A man is standing on the road side, and peering out onto the road. + A man is standing in a phone booth. + There are also two women. $\rightarrow$ People are probably waiting for a bus here.
It is windy and cold. + A man is standing in a phone booth reading a newspaper. $\rightarrow$ The man is sheltering from the cold wind in the phone booth.

[EVENT RELATIONSHIP REASONING]: It is windy and cold. + A man is standing in a phone booth reading a newspaper. $\rightarrow$ Two women are looking at the man.

[MENTAL STATE REASONING]: Inside a phone booth, a man is smiling while looking at a newspaper. + It is windy and cold outside. $\rightarrow$ The man in the phone booth is happy.

[DESCRIPTION]: There is some snow on the ground and it is windy, indicating it is winter. There are two men and two women standing on the roadside. There is a sign that says "NO STANDING BUS STOP", indicating it is near a bus stop. A man is standing on the road side, wrapping his coat tightly around himself, and peering out onto the road. They are probably waiting for a bus here. We can see it is cold. Inside a phone booth, a man is smiling while looking at a newspaper. He is sheltering from the cold wind in the phone booth. He looks happy, because it is warm there. The two women are also wrapping their coats tightly and looking at the man in the phone booth. They are probably friends and standing together. They are unhappy with the man. There are some buildings by the road.

The complete annotation instruction can be found in Appendix A. Considering that different people may understand some images differently, we ask three annotators to annotate each image. Then, we draw on the idea of majority voting to merge the three annotations into one. For [Entities] and [CoRs], we first accept the entities and CoRs annotated by at least two annotators; other entities and CoRs are also included if they are reasonable. The final [Description] is obtained by modifying the best annotated [Description] with the merged [Entities] and [CoRs]. We discard images where there is a significant difference in understanding among the three annotators.
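As a purely illustrative sketch of this majority-voting step, the entity merge could be implemented as follows; the threshold of two votes mirrors the "at least two annotators" rule, while the manual review of the remaining items is not automated here:

```python
from collections import Counter

def merge_entities(annotations: list[list[str]], min_votes: int = 2) -> list[str]:
    """Keep entities listed by at least `min_votes` of the annotators.

    Entities below the threshold are not discarded outright in the paper;
    they are reviewed manually and kept if reasonable, a step this sketch omits.
    """
    votes = Counter(entity for entities in annotations for entity in set(entities))
    return [entity for entity, count in votes.items() if count >= min_votes]

# Three hypothetical annotators for the image in Figure 3.
annotations = [
    ["men", "women", "phone booth", "newspaper", "snow"],
    ["men", "women", "phone booth", "sign", "snow"],
    ["men", "phone booth", "newspaper", "snow", "wind"],
]
print(merge_entities(annotations))
# -> ['men', 'women', 'phone booth', 'newspaper', 'snow']
```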
# 2.3 Tasks in CogBench

We design a generative Image Description task and a discriminative Multiple-Choice Question Answering task in CogBench.

# 2.3.1 Image Description Task

This is the primary task of the benchmark. The difference between our description task and existing image description tasks (Xie et al., 2022; Zhu et al., 2023a; Zhuge et al., 2023) is that we expect LVLMs to understand and describe the story in the image through high-level cognitive reasoning. For instance, in Figure 3, the description of the image should not only include what is in the picture but also focus on elucidating the story of "on a cold winter day, a man is reading a newspaper in a phone booth near the bus stop to escape the cold, while two passing women express their displeasure upon seeing that" through a series of reasoning processes.
# 2.3.2 Visual Question Answering Task

The VQA task features standard four-option multiple-choice questions, easing the evaluation process. Like the Description task, VQA questions involve different types of high-level cognitive reasoning, as illustrated by the question about event reasoning in Figure 4. We use GPT-4 to assist in generating questions based on the annotations from Section 2.2. With the annotated CoRs, both the conclusion (right side of $\rightarrow$) and the reasoning behind it (left side of $\rightarrow$) in each CoR can be used to generate questions and corresponding options, as depicted in Figure 4. These components in each CoR directly provide the correct options for the questions generated from them. Specifically, this process is two-fold. 1) Automated Question Generation: We use GPT-4 to generate questions for CogBench images, tailoring prompts for each reasoning category to produce CoR-related questions. The key point is to prompt GPT-4 to generate higher-quality distractors. An example prompt for this CoR-based GPT-assisted question generation approach is provided in Appendix B. 2) Manual Refinement: Despite GPT-4's capabilities, some generated questions are not challenging enough. In this stage, we manually refine the questions, ensuring they do not overtly favor the correct answer and that distractors are closely related to the question and misleading. Additionally, ChatGPT aids in identifying and filtering out simple questions that can be answered without image input.

Figure 4: Generating a multiple-choice question based on an [Event Reasoning] CoR annotation.

[EVENT REASONING]: It is windy and cold. + A man is standing in a phone booth reading a newspaper. $\rightarrow$ The man is sheltering from the cold wind in the phone booth.

Generated question: What is the man doing in the phone booth?
A. Making a phone call.
B. Reading a book.
C. Avoiding someone he doesn't want to see.
D. Sheltering from the wind.
Correct Answer: [D]
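As a rough sketch of step 1 (the prompt wording and helper below are our assumptions, not the paper's actual Appendix B prompt), the generation call might look like this, using the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical prompt skeleton; the paper's real prompt is in Appendix B.
PROMPT = """You are given a reasoning chain (CoR) about an image.
Premises: {premises}
Conclusion: {conclusion}
Write one four-option multiple-choice question whose correct option restates
the conclusion. The three distractors must be closely related to the question
and misleading, yet clearly wrong given the premises. Mark the correct option."""

def generate_question(premises: list[str], conclusion: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": PROMPT.format(premises=" + ".join(premises),
                                            conclusion=conclusion)}],
    )
    return response.choices[0].message.content

premises = ["It is windy and cold.",
            "A man is standing in a phone booth reading a newspaper."]
print(generate_question(premises,
                        "The man is sheltering from the cold wind in the phone booth."))
```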
# 2.4 CogBench Statistics

CogBench consists of 251 semantically rich images with a total of 2670 entities, 2243 CoRs, 251 descriptions and 2577 questions, indicating that the content of each image is complex, as shown in Table 1. The large number of CoRs for event-related reasoning and [Mental State Reasoning] reflects the rich and interesting stories in the images.

<table><tr><td></td><td>Time</td><td>Location</td><td>Character</td><td>Character Relationship</td><td>Event</td><td>Event Relationship</td><td>Next Moment Event</td><td>Mental State</td></tr><tr><td>CoR</td><td>47</td><td>177</td><td>106</td><td>263</td><td>701</td><td>425</td><td>107</td><td>417</td></tr><tr><td>QA</td><td>86</td><td>220</td><td>162</td><td>317</td><td>658</td><td>402</td><td>135</td><td>597</td></tr></table>

Table 1: Distribution of CoRs and questions in CogBench.
# 3 Experiments

In this section, we evaluate the selected LVLMs, detail the evaluation strategies, and discuss the results.

# 3.1 Large Vision-Language Models

We evaluate a selection of recent representative open-source and closed-source LVLMs, including InstructBLIP-7B (Dai et al., 2023), Qwen-VL series models (Bai et al., 2023; Wang et al., 2024), mPLUG-Owl-2 (Ye et al., 2023), LLaVA series models (Liu et al., 2023a, 2024; Li et al., 2024), ShareGPT4V (Chen et al., 2023a), CogVLM series models (Wang et al., 2023; Hong et al., 2024), InternVL2-26B (Chen et al., 2023c), GPT-4V and GPT-4o (OpenAI, 2023). A brief introduction to these models is provided in Appendix C.

# 3.2 CogBench Evaluation Strategy

# 3.2.1 Evaluation of the Description Task

**Evaluation Modes** For the Description task, we set up two evaluation modes: Spontaneous Description and Directed Reasoning. In the Spontaneous Description mode, we prompt the LVLMs with the following instruction to obtain detailed image descriptions: "Describe this image in detail." This mode is more similar to the Cookie Theft picture description task, which aims to stimulate spontaneous descriptions (Matias-Guiu et al., 2022), and it helps analyze how LVLMs behave when describing images. For the Directed Reasoning mode, the corresponding prompt is: "Please provide a detailed description of the story depicted in the image, including high-level reasoning about the time and location, the roles and relationships of the characters, the events and their causal relationships, what might happen next, and the mental states of the characters." This mode simplifies the Description task compared to the Spontaneous Description mode and aims to evaluate whether models can reason correctly when they know the directions we expect.
**Evaluation Metrics** We consider model performance at two levels: low-level Recognition ability and high-level Cognition ability. Evaluation metrics for both levels are calculated based on recall scores, referred to as the Recognition Score and the Cognition Score, respectively.

The Recognition Score is calculated as the ratio of recognized [Entities] to annotated [Entities] across all images. First, we use spaCy<sup>4</sup> to extract nouns from the model-generated description, and use sentence-transformers<sup>5</sup> to encode the annotated [Entities] and the extracted nouns into embeddings. Then, we calculate the cosine similarity between the embeddings of the [Entities] and the nouns. For each entity, if the cosine similarity score between the entity and any noun is greater than a threshold (0.6 in this paper), we consider the entity to be recognized by the model.
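A minimal sketch of this matching step is shown below; the specific spaCy pipeline and sentence-transformers checkpoint are our assumptions, as the paper only names the libraries:

```python
import spacy
from sentence_transformers import SentenceTransformer, util

nlp = spacy.load("en_core_web_sm")                 # assumed spaCy pipeline
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def recognition_score(entities: list[str], description: str,
                      threshold: float = 0.6) -> float:
    """Per-image recall: the fraction of annotated entities matched by some noun.

    The paper aggregates this ratio across all images; the matching logic
    per image is the same.
    """
    nouns = [tok.text for tok in nlp(description) if tok.pos_ in ("NOUN", "PROPN")]
    if not nouns:
        return 0.0
    sims = util.cos_sim(encoder.encode(entities), encoder.encode(nouns))
    # An entity counts as recognized if any noun clears the threshold.
    return float((sims.max(dim=1).values >= threshold).float().mean())

entities = ["men", "women", "phone booth", "newspaper", "snow"]
description = "Two men and two women stand near a phone booth on a snowy street."
print(recognition_score(entities, description))
```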
For the Cognition Score, we calculate scores for each of the eight cognitive reasoning dimensions, as well as an overall score, using GPT-4. To enhance objectivity and granularity, GPT-4 performs a binary classification task: assessing whether a generated description includes the semantics of each annotated CoR. For reasoning types other than [Event Relationship Reasoning], we task GPT-4 with determining whether the conclusion of each CoR is mentioned in the description. For [Event Relationship Reasoning], we task GPT-4 with determining whether each annotated causal relationship between events (i.e., the entire CoR) is present in the description. The per-CoR judgments for each dimension are then used to compute a recall score for that dimension. The overall Cognition Score is the sum of all CoR scores divided by the total number of CoRs. The corresponding prompts are shown in Appendix D.
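A minimal sketch of this binary check follows; the judging prompt is our paraphrase, not the paper's actual prompt from Appendix D:

```python
from openai import OpenAI

client = OpenAI()

JUDGE_PROMPT = """Description: {description}
Conclusion: {conclusion}
Does the description include the semantics of the conclusion?
Answer with exactly one word: Yes or No."""

def cor_included(description: str, conclusion: str) -> bool:
    """Ask GPT-4 whether a CoR conclusion is semantically present in a description."""
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(description=description,
                                                  conclusion=conclusion)}],
    )
    return response.choices[0].message.content.strip().lower().startswith("yes")

# The Cognition Score of one dimension is then recall over its CoRs, e.g.:
# score = sum(cor_included(desc, c) for c in conclusions) / len(conclusions)
```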
# 3.2.2 Evaluation of the VQA Task

For the multiple-choice questions in the VQA task, we use accuracy as the evaluation metric. As the questions are generated based on CoRs, we can also calculate the accuracy for each reasoning capability as well as for the overall cognitive capability.
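Since every question is tagged with the reasoning dimension of its source CoR, per-dimension accuracy reduces to simple bookkeeping; a sketch with hypothetical field names:

```python
from collections import defaultdict

def vqa_accuracy(questions: list[dict]) -> dict[str, float]:
    """Accuracy per reasoning dimension plus an overall score.

    Each question dict is assumed to carry its reasoning `dimension`,
    the model's `predicted` option, and the annotated correct `answer`.
    """
    correct, total = defaultdict(int), defaultdict(int)
    for q in questions:
        for key in (q["dimension"], "Overall"):
            total[key] += 1
            correct[key] += q["predicted"] == q["answer"]
    return {k: correct[k] / total[k] for k in total}

questions = [
    {"dimension": "Event", "predicted": "D", "answer": "D"},
    {"dimension": "Location", "predicted": "A", "answer": "B"},
]
print(vqa_accuracy(questions))  # {'Event': 1.0, 'Overall': 0.5, 'Location': 0.0}
```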
# 3.3 Results of the Description Task

We evaluate the LVLMs on the Description task in terms of both recognition and cognition abilities. As a reference, we also calculate traditional image captioning evaluation metrics by comparing the model-generated descriptions with the annotated reference [Description]; details are shown in Appendix E.

# 3.3.1 Recognition

Table 2 shows the Recognition Scores of models on the Description task. InternVL2-26B, LLaVA-OV-7B-Chat, GPT-4o and Qwen2-VL achieve relatively better performance than the other models, which means they can recognize and describe more entities. GPT-4V, CogVLM and CogVLM2 also demonstrate competitive performance. We conclude that the recognition ability of the best open-source LVLMs is approaching that of GPT-4o, with some models even surpassing it, whereas the remaining open-source LVLMs still show a significant gap in recognition capability. Note that some models perform worse in Mode 2 than in Mode 1, which could be because these models focus more on high-level semantic reasoning and overlook the description of some low-level entities. Although many LVLMs perform well, such as InternVL2-26B and GPT-4o, they still miss many entities, suggesting room for improvement in recognition capability.
<table><tr><td rowspan="2">Model</td><td colspan="2">Recognition Score</td></tr><tr><td>Mode 1</td><td>Mode 2</td></tr><tr><td>InstructBLIP-7B</td><td>40.0</td><td>36.4</td></tr><tr><td>Qwen-VL-Chat</td><td>43.3</td><td>45.8</td></tr><tr><td>LLaVA-v1.5-7B</td><td>39.8</td><td>41.2</td></tr><tr><td>LLaVA-v1.5-13B</td><td>41.0</td><td>39.3</td></tr><tr><td>mPLUG-Owl-2</td><td>37.4</td><td>37.8</td></tr><tr><td>ShareGPT4V-7B</td><td>46.9</td><td>47.3</td></tr><tr><td>ShareGPT4V-13B</td><td>48.7</td><td>47.4</td></tr><tr><td>LLaVA-v1.6-vicuna-7B</td><td>49.3</td><td>51.1</td></tr><tr><td>LLaVA-v1.6-vicuna-13B</td><td>53.3</td><td>53.9</td></tr><tr><td>LLaVA-v1.6-34B</td><td>52.2</td><td>52.0</td></tr><tr><td>CogVLM-Chat</td><td>61.6</td><td>56.5</td></tr><tr><td>CogVLM2-Llama3-Chat</td><td>62.3</td><td>58.9</td></tr><tr><td>InternVL2-26B</td><td>70.7</td><td>65.9</td></tr><tr><td>Qwen2-VL-7B</td><td>66.3</td><td>59.6</td></tr><tr><td>LLaVA-OV-7B-Chat</td><td>67.4</td><td>72.3</td></tr><tr><td>GPT-4V</td><td>62.9</td><td>56.5</td></tr><tr><td>GPT-4o</td><td>66.8</td><td>65.1</td></tr><tr><td>Oracle</td><td colspan="2">92.8</td></tr></table>

Table 2: Recognition Scores of LVLMs on the Description task. Mode 1 and Mode 2 refer to the Spontaneous Description mode and the Directed Reasoning mode, respectively. For reference, the Recognition Score of Oracle is calculated based on the annotated [Description] in the CogBench dataset as an estimated upper bound. Numbers are presented in % with a full score of $100\%$.

<table><tr><td>Model</td><td>Time</td><td>Location</td><td>Character</td><td>Character Relationship</td><td>Event</td><td>Event Relationship</td><td>Next Moment Event</td><td>Mental State</td><td>Overall</td></tr><tr><td>InstructBLIP-7B</td><td>14.9 / 17.0</td><td>54.2 / 59.9</td><td>24.5 / 29.2</td><td>30.0 / 41.1</td><td>10.3 / 9.3</td><td>4.9 / 5.6</td><td>1.9 / 6.5</td><td>17.3 / 22.8</td><td>16.7 / 19.8</td></tr><tr><td>Qwen-VL-Chat</td><td>23.4 / 25.5</td><td>57.1 / 58.8</td><td>28.3 / 31.1</td><td>29.3 / 50.6</td><td>15.1 / 19.4</td><td>12.7 / 13.2</td><td>3.7 / 9.3</td><td>12.0 / 25.7</td><td>19.3 / 26.3</td></tr><tr><td>LLaVA-v1.5-7B</td><td>8.5 / 19.1</td><td>45.2 / 59.3</td><td>15.1 / 18.9</td><td>18.3 / 40.7</td><td>8.8 / 8.6</td><td>4.9 / 5.6</td><td>1.9 / 1.9</td><td>13.2 / 21.8</td><td>12.8 / 18.6</td></tr><tr><td>LLaVA-v1.5-13B</td><td>12.8 / 14.9</td><td>48.6 / 55.9</td><td>17.9 / 19.8</td><td>24.3 / 39.5</td><td>9.8 / 9.3</td><td>4.7 / 5.9</td><td>3.7 / 4.7</td><td>16.3 / 21.3</td><td>15.0 / 18.5</td></tr><tr><td>mPLUG-Owl-2</td><td>6.4 / 12.8</td><td>48.0 / 57.1</td><td>23.6 / 22.6</td><td>21.3 / 41.8</td><td>8.4 / 9.4</td><td>4.0 / 4.5</td><td>1.9 / 3.7</td><td>13.7 / 17.3</td><td>13.6 / 17.9</td></tr><tr><td>ShareGPT4V-7B</td><td>19.1 / 14.9</td><td>60.5 / 58.8</td><td>20.8 / 25.5</td><td>22.4 / 35.0</td><td>10.0 / 12.6</td><td>4.0 / 6.4</td><td>2.8 / 3.7</td><td>15.1 / 17.5</td><td>15.6 / 18.8</td></tr><tr><td>ShareGPT4V-13B</td><td>23.4 / 17.0</td><td>57.1 / 60.5</td><td>23.6 / 29.2</td><td>26.2 / 37.6</td><td>12.4 / 12.4</td><td>6.6 / 7.3</td><td>2.8 / 3.7</td><td>14.4 / 16.1</td><td>17.1 / 19.3</td></tr><tr><td>LLaVA-v1.6-vicuna-7B</td><td>17.0 / 25.5</td><td>61.0 / 62.1</td><td>23.6 / 29.2</td><td>25.1 / 47.9</td><td>12.1 / 14.1</td><td>7.5 / 8.5</td><td>2.8 / 8.4</td><td>17.0 / 23.0</td><td>17.7 / 23.1</td></tr><tr><td>LLaVA-v1.6-vicuna-13B</td><td>17.0 / 27.7</td><td>63.3 / 65.5</td><td>27.4 / 33.0</td><td>23.6 / 42.2</td><td>15.1 / 15.8</td><td>9.6 / 9.9</td><td>0.9 / 2.8</td><td>17.7 / 26.4</td><td>19.3 / 24.1</td></tr><tr><td>LLaVA-v1.6-34B</td><td>25.5 / 27.7</td><td>65.5 / 68.4</td><td>28.3 / 47.2</td><td>22.8 / 48.7</td><td>16.8 / 18.1</td><td>10.8 / 13.4</td><td>3.7 / 10.3</td><td>16.3 / 30.5</td><td>20.2 / 28.3</td></tr><tr><td>CogVLM-Chat</td><td>29.8 / 46.8</td><td>75.1 / 72.3</td><td>40.6 / 50.0</td><td>23.6 / 45.2</td><td>28.1 / 29.1</td><td>21.6 / 27.3</td><td>4.7 / 7.5</td><td>27.1 / 33.3</td><td>29.4 / 35.2</td></tr><tr><td>CogVLM2-Llama3-Chat</td><td>34.0 / 31.9</td><td>73.4 / 75.7</td><td>46.2 / 49.1</td><td>24.7 / 41.1</td><td>28.4 / 32.2</td><td>26.8 / 32.5</td><td>5.6 / 8.4</td><td>30.2 / 40.0</td><td>31.4 / 37.9</td></tr><tr><td>InternVL2-26B</td><td>40.4 / 44.7</td><td>77.4 / 79.7</td><td>48.1 / 54.7</td><td>27.8 / 54.0</td><td>27.4 / 31.5</td><td>24.5 / 27.8</td><td>3.7 / 15.0</td><td>36.7 / 44.1</td><td>32.7 / 40.2</td></tr><tr><td>Qwen2-VL-7B</td><td>38.3 / 34.0</td><td>72.3 / 75.1</td><td>48.1 / 48.1</td><td>19.4 / 53.2</td><td>25.1 / 23.8</td><td>20.0 / 20.2</td><td>4.7 / 9.3</td><td>29.5 / 33.3</td><td>28.4 / 33.1</td></tr><tr><td>LLaVA-OV-7B-Chat</td><td>42.6 / 46.8</td><td>75.1 / 76.3</td><td>50.9 / 48.1</td><td>18.3 / 46.4</td><td>25.5 / 30.4</td><td>25.6 / 26.4</td><td>9.3 / 17.8</td><td>33.8 / 42.4</td><td>30.9 / 37.9</td></tr><tr><td>GPT-4V</td><td>40.4 / 40.4</td><td>74.0 / 70.1</td><td>44.3 / 49.1</td><td>34.2 / 57.4</td><td>28.0 / 33.2</td><td>25.4 / 29.2</td><td>10.3 / 18.7</td><td>42.7 / 51.6</td><td>34.8 / 41.8</td></tr><tr><td>GPT-4o</td><td>38.3 / 51.1</td><td>79.7 / 82.5</td><td>50.9 / 62.3</td><td>29.3 / 73.4</td><td>42.2 / 48.8</td><td>42.1 / 50.6</td><td>11.2 / 34.6</td><td>43.2 / 58.5</td><td>42.7 / 56.5</td></tr><tr><td>Oracle</td><td>91.5</td><td>97.8</td><td>94.3</td><td>81.4</td><td>98.1</td><td>92.2</td><td>89.7</td><td>92.3</td><td>93.2</td></tr></table>

Table 3: Cognition Scores of LVLMs on the Description task evaluated by GPT-4. The results under the Spontaneous Description mode and the Directed Reasoning mode are presented before and after the “/” in each table cell. For reference, the Cognition Scores of Oracle are calculated based on the annotated [Description] in the CogBench dataset as an estimated upper bound. Numbers are presented in % with a full score of $100\%$.
# 3.3.2 Cognition

**General analysis.** Table 3 shows the Cognition Scores of LVLMs on the Description task. GPT-4o achieves the best performance, and there is a large performance gap between GPT-4o and the open-source models. Among open-source models, InternVL2-26B, LLaVA-OV-7B-Chat, the CogVLM models and Qwen2-VL achieve relatively better performance, with some approaching GPT-4V. In terms of different reasoning capabilities, all LVLMs perform better on [Location Reasoning] than on the other dimensions, probably because it is a relatively lower-level kind of reasoning. In contrast, most open-source LVLMs show low performance on [Event Reasoning], [Event Relationship Reasoning], and [Next Moment Event Reasoning]. The Cognition Scores of some open-source LVLMs are only around or even below $10\%$ across these dimensions, indicating that they barely understand the story in the images. GPT-4o shows significantly better performance in these three reasoning capabilities. Moreover, although GPT-4o achieves the best performance, there is still a large gap between its Cognition Scores and the Oracle scores. This indicates that LVLMs still have substantial room for development in terms of cognitive abilities.

**Analysis based on different evaluation modes.** When the Directed Reasoning mode is applied, significant performance improvements can be observed compared to the Spontaneous Description mode. This suggests that current LVLMs, while potentially grasping some high-level semantic information in images, cannot spontaneously generate comprehensive descriptions through reasoning from various perspectives without appropriate prompts. Another intriguing finding is that for some open-source LVLMs, shifting from the Spontaneous Description mode to the Directed Reasoning mode results in noticeable performance improvements across various dimensions, such as [Character Relationship Reasoning]; however, for [Event Reasoning] and [Event Relationship Reasoning], the enhancements are rather limited. This further highlights their struggle to fully comprehend events in the images, with event-based reasoning being a key area where open-source LVLMs differ significantly from GPT-4o.

**Analysis of recognition and cognition abilities.** The performance gap between GPT-4o and open-source LVLMs in terms of cognition is clearly larger than that in terms of recognition. This suggests that cognitive reasoning is a more advanced ability than recognition, and that these LVLMs still have significant room for development.
# 3.3.3 Case Study

Figure 5 shows a failure case of GPT-4o on the Description task under the Directed Reasoning mode. In terms of recognition, GPT-4o performs well, recognizing most annotated entities such as men, women, buildings, phone booth, newspaper, coat, snow and sign, and only failing to recognize bus stop and wind. However, GPT-4o fails to understand the story in the image and gets a Cognition Score of only $30.0\%$. One of the most important reasons is that it does not recognize that the man is in the phone booth to escape the cold, rather than to make a phone call. This case demonstrates that CogBench reveals current LVLMs falling short in cognition, with a gap remaining between their cognitive abilities and human levels.

Figure 5: Case study of the Description task. The description is generated by GPT-4o in the Directed Reasoning mode. Recognized entities are marked in blue, and CoRs are marked in green.

The image depicts a scene set in what appears to be a bustling urban area, possibly mid-20th century based on the fashion, architecture, and presence of a classic phone booth. The scene unfolds on a snowy sidewalk in front of what seems to be a row of apartment buildings. Characters and their roles: 1. Man in the phone booth: This man, dressed in a beige trench coat and fedora hat, is engaged in a phone conversation. He holds a newspaper and appears to be happily engrossed in his conversation, possibly laughing or smiling. 2. Two women on the sidewalk: These women, bundled up in winter clothing (one in a fur coat and the other in a red coat), appear to be startled or alarmed. Their body language indicates they are huddled together, possibly sharing a moment of surprise or gossip while glancing furtively at the man. 3. Man bending over in the background: This individual, whose back is turned to both the phone booth and the women, is bent over presumably to pick something up. He is wearing a dark overcoat and hat, suggesting he is also trying to stay warm in the cold weather.

Setting and Time: The setting is an urban street, likely in North America, given the architecture and signage. The presence of snow and winter coats suggests it is the winter season, likely mid-20th century based on the style of clothing and the design of the phone booth.

Events and Causal Relationships: The man in the phone booth is happily engaged in his conversation, seemingly unaware of his surroundings. The women seem startled or shocked, perhaps at something they have overheard or something they have seen. Their gaze and body language suggest they are reacting to a shared experience, possibly connected to the man in the phone booth. The man in the background is not directly interacting with others, suggesting his actions are unrelated, yet his presence adds an element of everyday normalcy to the scene.

Mental States of Characters: The man in the phone booth appears to be enjoying his conversation, perhaps sharing a joke or good news. The women exhibit signs of surprise or shock, holding onto each other for comfort or out of fear of what they have encountered. The man in the background seems focused on his immediate task, oblivious to the tension between the women and the man in the phone booth.

Possible Next Events: The women may discuss or confront the man in the phone booth if he is the source of their alarm. The man in the phone booth might finish his call oblivious to the commotion he's caused. The man in the background may finish picking up whatever he is bending over for and continue on his way.

Overall, the image captures a moment of everyday life with an intriguing twist, prompting questions about the relationship between the characters and the source of the women's alarm.

Recognition Score: 80.0%
Cognition Score: 30.0%
# 3.3.4 Effectiveness of GPT-based Evaluation

To validate the GPT-based cognition evaluation method, we manually scored the CoRs of 20 images on a binary scale (0/1) and compared the accuracy of various evaluation methods on this subset. Table 4 shows that GPT-4 offers the highest accuracy, demonstrating that GPT-based evaluation aligns well with human assessment and is therefore an effective way to assess LVLMs' performance on the Description task. Implementation details of the evaluation methods other than ChatGPT and GPT-4 (Lin, 2004; Zhang et al., 2020; Sellam et al., 2020; He et al., 2021; Yin et al., 2021) can be found in Appendix F.

<table><tr><td>Model</td><td>Accuracy</td></tr><tr><td>ROUGE</td><td>0.656</td></tr><tr><td>BERTScore</td><td>0.635</td></tr><tr><td>BLEURT</td><td>0.620</td></tr><tr><td>DeBERTa</td><td>0.693</td></tr><tr><td>DocNLI</td><td>0.714</td></tr><tr><td>GPT-3.5</td><td>0.807</td></tr><tr><td>GPT-4</td><td>0.833</td></tr></table>

Table 4: CoR accuracy of cognition evaluation methods for the Description task.

# 3.4 Results of the VQA Task

Table 5 shows the performance of LVLMs on the VQA task. GPT-4o achieves the best performance. Among open-source LVLMs, LLaVA-OV-7B-Chat, InternVL2-26B, LLaVA-v1.6-34B, CogVLM2, and Qwen2-VL-7B demonstrate better results, approaching the level of GPT-4o.

Consistent with previous findings, reasoning about location is the easiest for LVLMs, while the event-related reasoning dimensions are more difficult. There is also a large gap between the performance of LVLMs and humans. Note that the Human accuracy in Table 5 is calculated based on the responses of five healthy people, all of whom hold a bachelor's degree and are between 20 and 30 years old. Furthermore, some LVLMs perform differently on the two tasks, e.g., LLaVA-v1.6-34B, which highlights the necessity of including both tasks in the design of CogBench.

<table><tr><td>Model</td><td>Time</td><td>Location</td><td>Character</td><td>Character Relationship</td><td>Event</td><td>Event Relationship</td><td>Next Moment Event</td><td>Mental State</td><td>Overall</td></tr><tr><td>InstructBLIP-7B</td><td>60.5</td><td>71.4</td><td>48.8</td><td>54.9</td><td>40.3</td><td>36.8</td><td>46.7</td><td>47.6</td><td>47.4</td></tr><tr><td>Qwen-VL-Chat</td><td>65.1</td><td>82.3</td><td>60.5</td><td>54.3</td><td>50.9</td><td>45.0</td><td>47.4</td><td>51.1</td><td>54.0</td></tr><tr><td>LLaVA-v1.5-7B</td><td>58.1</td><td>81.4</td><td>54.3</td><td>54.6</td><td>45.9</td><td>45.0</td><td>54.1</td><td>52.6</td><td>52.8</td></tr><tr><td>LLaVA-v1.5-13B</td><td>69.8</td><td>82.3</td><td>65.4</td><td>59.9</td><td>50.2</td><td>47.3</td><td>57.8</td><td>57.1</td><td>57.3</td></tr><tr><td>mPLUG-Owl-2</td><td>51.2</td><td>81.8</td><td>58.6</td><td>54.6</td><td>46.0</td><td>47.5</td><td>47.4</td><td>51.8</td><td>52.7</td></tr><tr><td>ShareGPT4V-7B</td><td>58.1</td><td>80.5</td><td>63.6</td><td>53.6</td><td>48.8</td><td>40.3</td><td>51.1</td><td>54.1</td><td>53.4</td></tr><tr><td>ShareGPT4V-13B</td><td>67.4</td><td>80.0</td><td>65.4</td><td>56.5</td><td>49.4</td><td>49.8</td><td>60.0</td><td>54.6</td><td>56.3</td></tr><tr><td>LLaVA-v1.6-vicuna-7B</td><td>60.5</td><td>80.9</td><td>57.4</td><td>55.8</td><td>51.4</td><td>43.8</td><td>55.6</td><td>58.6</td><td>55.8</td></tr><tr><td>LLaVA-v1.6-vicuna-13B</td><td>66.3</td><td>85.9</td><td>64.8</td><td>59.3</td><td>58.2</td><td>53.7</td><td>65.9</td><td>60.0</td><td>61.5</td></tr><tr><td>LLaVA-v1.6-34B</td><td>80.2</td><td>92.7</td><td>83.3</td><td>74.8</td><td>68.4</td><td>66.9</td><td>68.8</td><td>74.2</td><td>73.7</td></tr><tr><td>CogVLM-Chat</td><td>73.3</td><td>87.7</td><td>75.9</td><td>66.9</td><td>58.5</td><td>53.2</td><td>63.0</td><td>63.7</td><td>64.2</td></tr><tr><td>CogVLM2-Llama3-Chat</td><td>73.3</td><td>92.3</td><td>86.4</td><td>76.0</td><td>71.9</td><td>62.9</td><td>67.4</td><td>71.9</td><td>73.5</td></tr><tr><td>InternVL2-26B</td><td>81.4</td><td>91.8</td><td>82.7</td><td>78.2</td><td>71.9</td><td>67.2</td><td>66.7</td><td>72.0</td><td>74.4</td></tr><tr><td>Qwen2-VL-7B</td><td>80.2</td><td>90.0</td><td>82.1</td><td>75.1</td><td>66.4</td><td>63.2</td><td>72.6</td><td>69.8</td><td>71.6</td></tr><tr><td>LLaVA-OV-7B-Chat</td><td>81.4</td><td>93.2</td><td>85.2</td><td>75.1</td><td>71.6</td><td>69.4</td><td>71.9</td><td>74.5</td><td>75.4</td></tr><tr><td>GPT-4V</td><td>70.9</td><td>81.8</td><td>72.8</td><td>63.7</td><td>63.4</td><td>66.9</td><td>68.9</td><td>69.2</td><td>68.0</td></tr><tr><td>GPT-4o</td><td>81.4</td><td>90.9</td><td>81.5</td><td>68.5</td><td>75.5</td><td>74.1</td><td>83.7</td><td>77.1</td><td>77.1</td></tr><tr><td>Human</td><td>98.8</td><td>95.9</td><td>98.8</td><td>94.3</td><td>95.6</td><td>96.0</td><td>96.3</td><td>93.3</td><td>95.3</td></tr></table>

Table 5: Model performance on the VQA task. Each QA contains four options, with a chance rate of $25\%$. Numbers are presented in % with a full score of $100\%$.

# 4 Related Work

**Evaluation Benchmark for LVLMs.** To better understand the emerging capabilities of LVLMs, various evaluation benchmarks have been proposed. LVLM-eHub (Xu et al., 2023) evaluates LVLMs' capabilities across six categories using various publicly available CV datasets. MME (Fu et al., 2023), MMBench (Liu et al., 2023c) and SEED-Bench (Li et al., 2023a) use True/False questions or multiple-choice questions to evaluate different abilities. MM-Vet (Yu et al., 2024) evaluates LVLMs in terms of their integrated VL capabilities with open-ended questions. In contrast, CogBench focuses on high-level cognitive evaluation. Though some of these benchmarks also consider cognition as one of their evaluation dimensions, they do not provide a comprehensive evaluation, and most of the images they use assess only limited aspects of reasoning.
**Image Captioning.** Image Captioning is a classical VL task (Zhou et al., 2022). As model capabilities advance, researchers strive to enhance models' ability to describe images in detail. Krause et al. (2017) propose Image Paragraph Captioning, tasking models with generating a descriptive paragraph for an image. Recently, some researchers (Xie et al., 2022; Zhu et al., 2023a; Zhuge et al., 2023; Chen et al., 2023a) have been leveraging the ability of LLMs to generate more detailed image descriptions. None of these tasks considers evaluating the high-level cognitive abilities of models through description. The HL dataset (Cafagna et al., 2023) requires models to generate high-level captions, but it considers only three aspects (scene, action, rationale). The content of most images in existing datasets does not reach the level of a story. This reveals the need for higher-level datasets like CogBench.

**Visual Reasoning.** The Visual Reasoning task is closely related to the cognitive abilities of models. Visual Commonsense Reasoning (VCR) (Zellers et al., 2019) tasks models with answering visual questions using commonsense reasoning and justifying their answers. VisualCOMET (Park et al., 2020) is a framework of visual commonsense reasoning tasks to predict past and future events and present intents. Hessel et al. (2022) utilize images from VCR and Visual Genome (Krishna et al., 2017) to evaluate the ability of models to perform abductive reasoning. Fu et al. (2022) propose a task to identify the time and location of a given image. CURE (Chen et al., 2023b) is proposed to measure both the zero-shot reasoning performance and consistency of VLMs. These tasks consider fewer kinds of reasoning, and CogBench can be seen as the next step of these efforts.
# 5 Conclusion

In this paper, we incorporate the concept of the Cookie Theft picture description task into the evaluation of the high-level cognitive abilities of LVLMs and design a novel evaluation benchmark called CogBench. The images in CogBench are of high quality and require more complex cognitive reasoning to interpret, distinguishing it from existing image datasets. Experiments reveal a significant gap between the cognitive abilities of LVLMs and humans, highlighting CogBench as a challenging benchmark.

# Limitations

Given the scarcity of images that meet our data collection standards, the number of images in CogBench is relatively limited. However, it is comparable to existing evaluation benchmarks, such as MM-Vet, and is sufficient for CogBench to serve as a reliable evaluation benchmark.

# Ethical Considerations

Most images in CogBench are manually collected from Pinterest in accordance with its terms of service. The images are used under fair use for research purposes only, and we will share our dataset with researchers who adhere to the ethical considerations established in this study. During the annotation process, we ensure that our annotators receive a fair wage and promptly address any questions they have. They are free to take breaks or quit the annotation task at any time.

# Acknowledgments

This work has been supported by the China NSFC Project (No. U23B2018). Kenny Q. Zhu was partly supported by NSF Award No. 2349713.

# References
Donna Rose Addis, Alana T. Wong, and Daniel L. Schacter. 2007. Remembering the past and imagining the future: Common and distinct neural substrates during event construction and elaboration. Neuropsychologia, 45(7):1363-1377.

Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-VL: A versatile vision-language model for understanding, localization, text reading, and beyond. Preprint, arXiv:2308.12966.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.

Sebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712.

Lindsey J. Byom and Bilge Mutlu. 2013. Theory of mind: Mechanisms, methods, and new directions. Frontiers in Human Neuroscience, 7.

Michele Cafagna, Kees van Deemter, and Albert Gatt. 2023. HL dataset: Visually-grounded description of scenes, actions and rationales. In Proceedings of the 16th International Natural Language Generation Conference, pages 293-312.

Lin Chen, Jisong Li, Xiaoyi Dong, Pan Zhang, Conghui He, Jiaqi Wang, Feng Zhao, and Dahua Lin. 2023a. ShareGPT4V: Improving large multimodal models with better captions. arXiv preprint arXiv:2311.12793.

Yangyi Chen, Karan Sikka, Michael Cogswell, Heng Ji, and Ajay Divakaran. 2023b. Measuring and improving chain-of-thought reasoning in vision-language models. arXiv preprint arXiv:2309.04461.

Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su, Guo Chen, Sen Xing, Muyan Zhong, Qinglong Zhang, Xizhou Zhu, Lewei Lu, Bin Li, Ping Luo, Tong Lu, Yu Qiao, and Jifeng Dai. 2023c. InternVL: Scaling up vision foundation models and aligning for generic visual-linguistic tasks. arXiv preprint arXiv:2312.14238.

Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. 2023. Vicuna: An open-source chatbot impressing GPT-4 with $90\%$ ChatGPT quality. See https://vicuna.lmsys.org (accessed 14 April 2023).

Louise Cummings. 2019. Describing the cookie theft picture: Sources of breakdown in Alzheimer's dementia. Pragmatics and Society, 10:151-174.

Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. 2023. InstructBLIP: Towards general-purpose vision-language models with instruction tuning. Preprint, arXiv:2305.06500.

Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, et al. 2023. MME: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394.

Xingyu Fu, Ben Zhou, Ishaan Chandratreya, Carl Vondrick, and Dan Roth. 2022. There's a time and place for reasoning beyond the image. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1138-1149, Dublin, Ireland. Association for Computational Linguistics.

Harold Goodglass, Edith Kaplan, and Sandra Weintraub. 2001. BDAE: The Boston Diagnostic Aphasia Examination. Lippincott Williams & Wilkins, Philadelphia, PA.

Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2023. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. In The Eleventh International Conference on Learning Representations.

Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: Decoding-enhanced BERT with disentangled attention. In International Conference on Learning Representations.

Jack Hessel, Jena D Hwang, Jae Sung Park, Rowan Zellers, Chandra Bhagavatula, Anna Rohrbach, Kate Saenko, and Yejin Choi. 2022. The abduction of Sherlock Holmes: A dataset for visual abductive reasoning. In European Conference on Computer Vision, pages 558-575. Springer.

Wenyi Hong, Weihan Wang, Ming Ding, Wenmeng Yu, Qingsong Lv, Yan Wang, Yean Cheng, Shiyu Huang, Junhui Ji, Zhao Xue, et al. 2024. CogVLM2: Visual language models for image and video understanding. arXiv preprint arXiv:2408.16500.

Jonathan Krause, Justin Johnson, Ranjay Krishna, and Li Fei-Fei. 2017. A hierarchical approach for generating descriptive image paragraphs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 317-325.

Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32-73.

Bo Li, Yuanhan Zhang, Dong Guo, Renrui Zhang, Feng Li, Hao Zhang, Kaichen Zhang, Yanwei Li, Ziwei Liu, and Chunyuan Li. 2024. LLaVA-OneVision: Easy visual task transfer. arXiv preprint arXiv:2408.03326.

Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. 2023a. SEED-Bench: Benchmarking multimodal LLMs with generative comprehension. arXiv preprint arXiv:2307.16125.

Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. 2023b. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597.

Yifan Li, Yifan Du, Kun Zhou, Jinping Wang, Xin Zhao, and Ji-Rong Wen. 2023c. Evaluating object hallucination in large vision-language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 292-305, Singapore. Association for Computational Linguistics.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.

Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. 2023a. Improved baselines with visual instruction tuning. Preprint, arXiv:2310.03744.

Haotian Liu, Chunyuan Li, Yuheng Li, Bo Li, Yuanhan Zhang, Sheng Shen, and Yong Jae Lee. 2024. LLaVA-NeXT: Improved reasoning, OCR, and world knowledge.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023b. Visual instruction tuning. In NeurIPS.

Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. 2023c. MMBench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281.

Jordi A. Matias-Guiu, Paz Suárez-Coalla, Miguel Yus, Vanesa Pytel, Laura Hernández-Lorenzo, Cristina Delgado-Alonso, Alfonso Delgado-Álvarez, Natividad Gómez-Ruiz, Carmen Polidura, María Nieves Cabrera-Martín, Jorge Matías-Guiu, and Fernando Cuetos. 2022. Identification of the main components of spontaneous speech in primary progressive aphasia and their neural underpinnings using multimodal MRI and FDG-PET imaging. Cortex, 146:141-160.

Kimberly D Mueller, Bruce Hermann, Jonilda Mecollari, and Lyn S Turkstra. 2018. Connected speech and language in mild cognitive impairment and Alzheimer's disease: A review of picture description tasks. Journal of Clinical and Experimental Neuropsychology, 40(9):917-939.

OpenAI. 2023. GPT-4 technical report. Preprint, arXiv:2303.08774.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.

Jae Sung Park, Chandra Bhagavatula, Roozbeh Mottaghi, Ali Farhadi, and Yejin Choi. 2020. VisualCOMET: Reasoning about the dynamic context of a still image. In Proceedings of the European Conference on Computer Vision (ECCV).

Thibault Sellam, Dipanjan Das, and Ankur P Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of ACL.

Guangyao Shen, Xin Wang, Xuguang Duan, Hongzhi Li, and Wenwu Zhu. 2020. MEmoR: A dataset for multimodal emotion reasoning in videos. In Proceedings of the 28th ACM International Conference on Multimedia, MM '20, page 493-502, New York, NY, USA. Association for Computing Machinery.

Mashrura Tasnim, Malikeh Ehghaghi, Brian Diep, and Jekaterina Novikova. 2022. DEPAC: A corpus for depression and anxiety detection from speech. In Proceedings of the Eighth Workshop on Computational Linguistics and Clinical Psychology, pages 1-16, Seattle, USA. Association for Computational Linguistics.

Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4566-4575.

Peng Wang, Shuai Bai, Sinan Tan, Shijie Wang, Zhihao Fan, Jinze Bai, Keqin Chen, Xuejing Liu, Jialin Wang, Wenbin Ge, Yang Fan, Kai Dang, Mengfei Du, Xuancheng Ren, Rui Men, Dayiheng Liu, Chang Zhou, Jingren Zhou, and Junyang Lin. 2024. Qwen2-VL: Enhancing vision-language model's perception of the world at any resolution. arXiv preprint arXiv:2409.12191.

Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang, Lei Zhao, Xixuan Song, Jiazheng Xu, Bin Xu, Juanzi Li, Yuxiao Dong, Ming Ding, and Jie Tang. 2023. CogVLM: Visual expert for pretrained language models. Preprint, arXiv:2311.03079.

Yujia Xie, Luowei Zhou, Xiyang Dai, Lu Yuan, Nguyen Bach, Ce Liu, and Michael Zeng. 2022. Visual clues: Bridging vision and language foundations for image paragraph captioning. Advances in Neural Information Processing Systems, 35:17287-17300.

Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. 2023. LVLM-eHub: A comprehensive evaluation benchmark for large vision-language models. arXiv preprint arXiv:2306.09265.

Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. 2023. The dawn of LMMs: Preliminary explorations with GPT-4V(ision). arXiv preprint arXiv:2309.17421, 9(1):1.

Qinghao Ye, Haiyang Xu, Jiabo Ye, Ming Yan, Haowei Liu, Qi Qian, Ji Zhang, Fei Huang, and Jingren Zhou. 2023. mPLUG-Owl2: Revolutionizing multi-modal large language model with modality collaboration. arXiv preprint arXiv:2311.04257.

Wenpeng Yin, Dragomir Radev, and Caiming Xiong. 2021. DocNLI: A large-scale dataset for document-level natural language inference. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 4913-4922, Online. Association for Computational Linguistics.

Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. 2024. MM-Vet: Evaluating large multimodal models for integrated capabilities. In International Conference on Machine Learning. PMLR.

Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.

Zelin Zhou, Zhiling Zhang, Xuanan Xu, Zeyu Xie, Mengyue Wu, and Kenny Q Zhu. 2022. Can audio captions be evaluated with image caption metrics? In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 981-985. IEEE.

Deyao Zhu, Jun Chen, Kilichbek Haydarov, Xiaoqian Shen, Wenxuan Zhang, and Mohamed Elhoseiny. 2023a. ChatGPT asks, BLIP-2 answers: Automatic questioning towards enriched visual descriptions. arXiv preprint arXiv:2303.06594.

Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023b. MiniGPT-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592.

Yan Zhuang, Qi Liu, Yuting Ning, Weizhe Huang, Rui Lv, Zhenya Huang, Guanhao Zhao, Zheng Zhang, Qingyang Mao, Shijin Wang, et al. 2023. Efficiently measuring the cognitive ability of LLMs: An adaptive testing perspective. arXiv preprint arXiv:2306.10512.

Mingchen Zhuge, Haozhe Liu, Francesco Faccio, Dylan R Ashley, Robert Csordas, Anand Gopalakrishnan, Abdullah Hamdi, Hasan Hammoud, Vincent Herrmann, Kazuki Irie, Louis Kirsch, Bing Li, Guohao Li, Shuming Liu, Jinjie Mai, Piotr Piekos, Aditya Ramesh, Imanol Schlag, Weimin Shi, Aleksandar Stanić, Wenyi Wang, Yuhui Wang, Mengmeng Xu, Deng-Ping Fan, Bernard Ghanem, and Jürgen Schmidhuber. 2023. Mindstorms in natural language-based societies of mind. arXiv preprint arXiv:2305.17066.
|
| 369 |
+
|
| 370 |
+
# A Image Annotation Instruction
|
| 371 |
+
|
| 372 |
+
Figure 6 shows the image annotation instruction for annotators of CogBench.
|
| 373 |
+
|
| 374 |
+
# B Prompt of CoR-based GPT-assisted Question Generation
|
| 375 |
+
|
| 376 |
+
Figure 7 shows an example prompt of the CoR-based GPT-assisted question generation method for GPT-4. This prompt is used to generate questions based on [Event Reasoning] CoRs. Prompts for other reasoning types are similar to this one.
|
| 377 |
+
|
| 378 |
+
# C Introduction to Selected LVLMs
|
| 379 |
+
|
| 380 |
+
<table><tr><td>Model</td><td>Visual Encoder</td><td>Language Model</td></tr><tr><td>InstructBLIP-7B</td><td>EVA-G</td><td>Vicuna-7B</td></tr><tr><td>Qwen-VL-Chat</td><td>ViT-G/16</td><td>Qwen-7B</td></tr><tr><td>LLaVA-v1.5-7B</td><td>CLIP ViT-L/14</td><td>Vicuna-v1.5-7B</td></tr><tr><td>LLaVA-v1.5-13B</td><td>CLIP ViT-L/14</td><td>Vicuna-v1.5-13B</td></tr><tr><td>mPLUG-Owl-2</td><td>CLIP ViT-L/14</td><td>LLaMA2-7B</td></tr><tr><td>ShareGPT4V-7B</td><td>CLIP ViT-L/14</td><td>Vicuna-v1.5-7B</td></tr><tr><td>ShareGPT4V-13B</td><td>CLIP ViT-L/14</td><td>Vicuna-v1.5-13B</td></tr><tr><td>LLaVA-v1.6-vicuna-7B</td><td>CLIP ViT-L/14</td><td>Vicuna-v1.5-7B</td></tr><tr><td>LLaVA-v1.6-vicuna-13B</td><td>CLIP ViT-L/14</td><td>Vicuna-v1.5-13B</td></tr><tr><td>LLaVA-v1.6-34B</td><td>CLIP ViT-L/14</td><td>Nous-Hermes-2-Yi-34B</td></tr><tr><td>CogVLM-Chat</td><td>EVA2-CLIP-E</td><td>Vicuna-v1.5-7B</td></tr><tr><td>CogVLM2-Llama3-Chat</td><td>EVA2-CLIP-E</td><td>LLaMA3-8B</td></tr><tr><td>InternVL2-26B</td><td>InternViT-6B</td><td>InternLM2-Chat-20B</td></tr><tr><td>Qwen2-VL-7B</td><td>QwenViT</td><td>Qwen2-7B</td></tr><tr><td>LLaVA-OV-7B-Chat</td><td>SigLIP-400M</td><td>Qwen2-7B</td></tr><tr><td>GPT-4V</td><td>-</td><td>-</td></tr><tr><td>GPT-4o</td><td>-</td><td>-</td></tr></table>
|
| 381 |
+
|
| 382 |
+
Table 6: LVLMs evaluated in this paper.
|
| 383 |
+
|
| 384 |
+
- InstructBLIP (Dai et al., 2023) is an extension of BLIP-2 (Li et al., 2023b), designed to tackle the challenges of vision-language instruction tuning. It consists of an image encoder, an LLM, and a Q-Former. We use "clip2_vicuna_instruct" + "vicuna7b" for testing.
|
| 385 |
+
- Qwen-VL-Chat (Bai et al., 2023) is the instruction-tuned VL chatbot based on Qwen-VL, which consists of a visual encoder, an LLM, and a position-aware vision-language adapter. Its training process consists of two pre-training stages followed by a final instruction fine-tuning stage. We test "Qwen-VL-Chat" in the paper.
|
| 386 |
+
- LLaVA v1.5 (Liu et al., 2023a) is an upgraded version of LLaVA (Liu et al., 2023b), who connects a vision encoder and LLM for visual and language understanding. LLaVA is instruction-tuned on the language-image instruction-following data generated by
|
| 387 |
+
|
| 388 |
+
language-only GPT-4. By using CLIP-ViT-L-336px with an MLP projection and adding academic-task-oriented VQA data with simple response formatting prompts, LLaVA v1.5 achieves better performance. "llava-v1.5-7b" and "llava-v1.5-13b" are tested.
|
| 389 |
+
|
| 390 |
+
- mPLUG-Owl-2 (Ye et al., 2023) leverages modality collaboration to enhance performance across both text and multi-modal tasks. It adopts a modularized network design, with the language decoder serving as a universal interface to manage different modalities. We test "mplug-owl2-llama2-7b" in the paper.
|
| 391 |
+
- ShareGPT4V (Chen et al., 2023a) follows the design of LLaVA v1.5. It incorporates a large-scale resource featuring highly descriptive captions into both the pre-training and supervised fine-tuning phases. We test "ShareGPT4V-7B" and "ShareGPT4V-13B".
|
| 392 |
+
- LLaVA v1.6 (Liu et al., 2024) maintains the minimalist design and data efficiency of LLaVA-v1.5. It enhances LLaVA-v1.5 with dynamic high resolution, data mixture, and scaling of the LLM backbone. We test "llava-v1.6-vicuna-7b-hf", "llava-v1.6-vicuna-13b-hf" and "llava-v1.6-34b-hf" in the paper.
|
| 393 |
+
- CogVLM (Wang et al., 2023) comprises a ViT encoder, an MLP adapter, a pretrained LLM, and a visual expert module. Unlike the common shallow alignment method that maps image features to the input space of language model, CogVLM uses a trainable visual expert module in the attention and FFN layers to bridge the gap between the frozen pretrained language model and image encoder. "cogglm-chat-hf" is tested in the paper.
|
| 394 |
+
- CogVLM2 (Hong et al., 2024) inherits the visual expert architecture from CogVLM. Differently, CogVLM2 uses a $2 \times 2$ downsampling module to increase input resolution while maintaining efficiency, with LLaMA3-8B as its backbone. The pre-training and post-training data are also improved in terms of diversity and quality. "cogglm2-llama3-chat19B" is tested.
|
| 395 |
+
- InternVL2 (Chen et al., 2023c) family adopts a progressive alignment training strategy, resulting in the first vision foundation model

You are going to see some pictures. Each picture tells a story and requires different kinds of reasoning to fully understand it. You will first be asked to identify the entities and reasoning processes in the picture. Then, you will need to describe the story of the picture based on your identified entities and reasoning processes.

First, you will be asked to identify the entities in the picture. The annotation format is [A, B, C], where A, B, C are entities.

[Entities]: Please list the entities appearing in the picture, including people, animals, objects, etc. You are encouraged to list as many entities as possible. Note that these entities need to appear in your picture description afterwards. For entities that are difficult to recognize, please do not list them here or describe them.

Then, you will be asked to identify different reasoning processes in the picture. The annotation format should follow the structure [A1 + A2 -> B], where A1 and A2 are premises and B is the conclusion. Note that if you write a conclusion, there must be at least one premise. Do not write a conclusion only, like [B]. Please write one conclusion at a time, and do not write a reasoning process like [A1 -> B -> C], which should be split into two. Each picture does not necessarily require all kinds of reasoning. Please write None if a picture does not involve a specific kind of reasoning or it is not important in the picture.

[Special Time Reasoning]: Please write your reasoning processes about the special time of the story in the picture, e.g. festivals, seasons, etc. The special time is usually relevant to the story of the picture. For instance, if it is daytime in a picture, it is easily recognized, requires no reasoning and there is nothing special, so you can write None. However, if there is a lamp on or a clock indicating a specific time, you can write down your reasoning about the time.

[Location Reasoning]: Please write your reasoning processes about the location of the story in the picture, e.g. near the school.

[Character Reasoning]: Please write your reasoning processes about the characters of the subjects in the picture, e.g. a teacher, a doctor, etc.

[Character Relationship Reasoning]: Please write your reasoning processes about the relationships between the characters in the picture, e.g. mother-son relationship.

[Event Reasoning]: Please write your reasoning processes about the events in the current and previous moments in the picture based on the clues provided. Note that you only need to annotate those high-level events and can ignore the low-level ones. For instance, "the woman is looking at the man" is a low-level event and you can ignore its reasoning process. Differently, the reasoning process [A mother is busy cooking. + A boy is fetching cookies behind the mom. + A girl is shushing the boy. -> The boy is stealing cookies.] is about a high-level event "stealing" and you should write it down.

[Event Relationship Reasoning]: Please write your reasoning processes about the relationships between different events in the picture. These events are usually linked through causal and temporal relations. Note that events in this part do not necessarily appear in the [Event Reasoning] part, as some events here are low-level events.

[Next Moment Event Reasoning]: Please write your reasoning processes about the events that will happen in the next moment. Note that you only need to write down events that have a very high probability of happening, instead of guessing what might happen next.

[Mental State Reasoning]: Please write your reasoning processes about the mental states of the subjects in the picture, e.g. daydreaming, happy, etc. You need to reason as best you can about the mental states of all the subjects in the picture, unless they are not showing obvious emotions.

Finally, you will be asked to describe the picture in as much detail as you can.

[Description]: Please describe all you see in the picture in a paragraph based on the entities and reasoning processes you identified above, ensuring that all of them are included in your description. Each picture has a story behind it and you need to tell that story through your description.

Figure 6: Image annotation instruction for annotators.

- Qwen2-VL (Wang et al., 2024) is built upon the Qwen-VL architecture. It introduces the Naive Dynamic Resolution mechanism, which allows the model to handle arbitrary image resolutions by dynamically adjusting the number of visual tokens. It also incorporates Multimodal Rotary Position Embedding (M-RoPE), enabling the effective fusion of positional information across text, images, and videos. "Qwen2-VL-7B-Instruct" is tested.

- LLaVA-OneVision-Chat (Li et al., 2024) is an enhanced version of LLaVA-OneVision, with improvements achieved through preference alignment for better visual-chat capabilities. LLaVA-OneVision is developed by integrating insights into data, models, and visual representations from the LLaVA v1.6 series. "llava-onevision-qwen2-7b-ov-chat" is tested, and we use LLaVA-OV-Chat to refer to LLaVA-OneVision-Chat in this paper.

- GPT-4V (OpenAI, 2023) is a powerful LVLM developed by OpenAI. The version "gpt-4-turbo" is tested.

- GPT-4o (OpenAI, 2023) is currently one of the most powerful multimodal models. It is a single model trained end-to-end across text, vision, and audio. The version "gpt-4o" is tested.

Table 6 shows an overview of the designs of the different LVLMs.

# D Prompts for GPT-based Cognition Evaluation of the Description Task

Figure 8 and Figure 9 show the prompts used for cognition evaluation of the Description task for GPT-4 (or ChatGPT).

# E Evaluation of LVLMs on the Description Task Using Traditional Image Captioning Metrics
Table 7 shows the model performance on traditional image captioning evaluation metrics. Following Krause et al. (2017), we use METEOR (Banerjee and Lavie, 2005), CIDEr (Vedantam et al., 2015), and BLEU-1 through BLEU-4 (Papineni et al., 2002) to evaluate model performance on the Description task. Similar to the findings of Zhu et al. (2023a), traditional image captioning evaluation metrics are not well suited to evaluating the Description task, for two possible reasons: image descriptions are longer and more flexible than traditional image captions, and our Description task requires evaluation metrics to place more emphasis on the high-level semantics of the description.
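
For readers unfamiliar with these n-gram metrics, the following is a minimal sketch of sentence-level BLEU-1 through BLEU-4 scoring. This is an assumption of this note, not the paper's pipeline: it uses NLTK (the paper does not state which implementation it used), and the example sentences are illustrative.

```python
# Minimal sketch of BLEU-1..4 scoring for a generated description
# (assumes NLTK; the paper does not specify its implementation).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "a man reads a newspaper inside a phone booth".split()
hypothesis = "a man is reading a newspaper in a phone booth".split()

smooth = SmoothingFunction().method1  # avoids zero scores when an n-gram order is unmatched
for n in range(1, 5):
    weights = tuple(1.0 / n for _ in range(n))  # uniform weights over 1..n-grams
    score = sentence_bleu([reference], hypothesis, weights=weights,
                          smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```
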
# F Implementation of Non-GPT-Based Cognition Evaluation Methods for the Description Task
The cognition evaluation of the Description task is to determine whether each CoR is mentioned in the description. Apart from ChatGPT or GPT-4, some other evaluation methods are also implemented to perform this classification task, as shown in Table 4.
For methods based on ROUGE (Lin, 2004), BERTScore (Zhang et al., 2020), and BLEURT (Sellam et al., 2020), we first split the description into sentences and then use each CoR as the reference to calculate the (recall) score for each sentence compared to the CoR. Then, the highest score among all calculated scores is taken as the score of the CoR corresponding to the description. Finally, the score is converted into 0 or 1 using a threshold.
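
As an illustration of this max-over-sentences scheme, here is a minimal sketch using ROUGE-L recall. It assumes the `rouge-score` package; the naive sentence-splitting heuristic and the 0.5 threshold are illustrative, not the paper's exact choices.

```python
# Sketch of the sentence-level matching scheme described above
# (assumes the rouge-score package; threshold value is illustrative).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def cor_mentioned(description: str, cor: str, threshold: float = 0.5) -> int:
    """Return 1 if any sentence of the description matches the CoR."""
    sentences = [s.strip() for s in description.split(".") if s.strip()]
    # Use the CoR as the reference; keep the best recall over all sentences.
    best = max(scorer.score(cor, s)["rougeL"].recall for s in sentences)
    return int(best >= threshold)

print(cor_mentioned("A man reads a newspaper. It is windy outside.",
                    "The man is reading a newspaper."))
```
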
We also tried Natural Language Inference (NLI) models to perform the task. First, we use DeBERTa (He et al., 2021, 2023) to perform a sentence-level NLI task similar to the method described above: if at least one of the sentences yields "Entailment", the score of the CoR is 1. The model we adopt is mDeBERTa-v3-base-xnli-multilingual-nli-2mil7. The second NLI model we tried is DocNLI (Yin et al., 2021), which can directly take the description and the CoR as input and perform the classification task.
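
A minimal sketch of this sentence-level NLI check is shown below. It assumes the Hugging Face `transformers` library; the checkpoint ID and label casing are assumptions based on the model name given in the text.

```python
# Sketch of the sentence-level NLI check described above (assumes the
# transformers library; checkpoint ID and label strings are assumptions
# based on the mDeBERTa model named in the text).
from transformers import pipeline

nli = pipeline("text-classification",
               model="MoritzLaurer/mDeBERTa-v3-base-xnli-multilingual-nli-2mil7")

def cor_entailed(description: str, cor: str) -> int:
    """Score 1 if at least one description sentence entails the CoR."""
    sentences = [s.strip() for s in description.split(".") if s.strip()]
    for premise in sentences:
        pred = nli({"text": premise, "text_pair": cor})[0]
        if pred["label"].lower() == "entailment":
            return 1
    return 0
```
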
<table><tr><td>Model</td><td>METEOR</td><td>CIDEr</td><td>BLEU-1</td><td>BLEU-2</td><td>BLEU-3</td><td>BLEU-4</td></tr><tr><td>InstructBLIP-7B</td><td>0.130 / 0.183</td><td>0.043 / 0.003</td><td>0.255 / 0.218</td><td>0.127 / 0.104</td><td>0.063 / 0.049</td><td>0.033 / 0.024</td></tr><tr><td>Qwen-VL-Chat</td><td>0.130 / 0.196</td><td>0.037 / 0.016</td><td>0.242 / 0.262</td><td>0.124 / 0.138</td><td>0.059 / 0.069</td><td>0.030 / 0.036</td></tr><tr><td>LLaVA-V1.5-7B</td><td>0.146 / 0.182</td><td>0.054 / 0.020</td><td>0.309 / 0.275</td><td>0.158 / 0.138</td><td>0.076 / 0.065</td><td>0.037 / 0.032</td></tr><tr><td>LLaVA-V1.5-13B</td><td>0.146 / 0.176</td><td>0.051 / 0.018</td><td>0.312 / 0.274</td><td>0.160 / 0.137</td><td>0.076 / 0.066</td><td>0.037 / 0.034</td></tr><tr><td>mPLUG-Owl-2</td><td>0.132 / 0.184</td><td>0.035 / 0.012</td><td>0.260 / 0.235</td><td>0.126 / 0.116</td><td>0.057 / 0.053</td><td>0.027 / 0.025</td></tr><tr><td>ShareGPT4V-7B</td><td>0.162 / 0.191</td><td>0.017 / 0.014</td><td>0.259 / 0.222</td><td>0.120 / 0.113</td><td>0.050 / 0.053</td><td>0.024 / 0.027</td></tr><tr><td>ShareGPT4V-13B</td><td>0.165 / 0.184</td><td>0.024 / 0.015</td><td>0.278 / 0.254</td><td>0.129 / 0.126</td><td>0.055 / 0.059</td><td>0.026 / 0.030</td></tr><tr><td>LLaVA-v1.6-vicuna-7B</td><td>0.169 / 0.190</td><td>0.026 / 0.000</td><td>0.278 / 0.159</td><td>0.134 / 0.085</td><td>0.060 / 0.041</td><td>0.029 / 0.021</td></tr><tr><td>LLaVA-v1.6-vicuna-13B</td><td>0.172 / 0.195</td><td>0.023 / 0.000</td><td>0.283 / 0.166</td><td>0.137 / 0.090</td><td>0.061 / 0.043</td><td>0.030 / 0.021</td></tr><tr><td>LLaVA-v1.6-34B</td><td>0.172 / 0.199</td><td>0.026 / 0.000</td><td>0.279 / 0.173</td><td>0.134 / 0.092</td><td>0.060 / 0.044</td><td>0.030 / 0.021</td></tr><tr><td>CogVLM-Chat</td><td>0.157 / 0.174</td><td>0.069 / 0.056</td><td>0.297 / 0.326</td><td>0.151 / 0.154</td><td>0.075 / 0.069</td><td>0.039 / 0.034</td></tr><tr><td>CogVLM2-Llama3-Chat</td><td>0.176 / 0.191</td><td>0.059 / 0.032</td><td>0.356 / 0.257</td><td>0.183 / 0.127</td><td>0.091 / 0.059</td><td>0.049 / 0.029</td></tr><tr><td>InternVL2-26B</td><td>0.210 / 0.202</td><td>0.008 / 0.000</td><td>0.250 / 0.166</td><td>0.136 / 0.091</td><td>0.070 / 0.044</td><td>0.037 / 0.023</td></tr><tr><td>Qwen2-VL-7B</td><td>0.205 / 0.183</td><td>0.007 / 0.000</td><td>0.222 / 0.154</td><td>0.121 / 0.084</td><td>0.061 / 0.041</td><td>0.032 / 0.021</td></tr><tr><td>LLaVA-OV-7B-Chat</td><td>0.198 / 0.192</td><td>0.009 / 0.000</td><td>0.245 / 0.138</td><td>0.118 / 0.072</td><td>0.051 / 0.033</td><td>0.024 / 0.016</td></tr><tr><td>GPT-4V</td><td>0.189 / 0.191</td><td>0.013 / 0.000</td><td>0.250 / 0.153</td><td>0.113 / 0.071</td><td>0.047 / 0.028</td><td>0.022 / 0.011</td></tr><tr><td>GPT-4o</td><td>0.182 / 0.189</td><td>0.070 / 0.000</td><td>0.332 / 0.145</td><td>0.161 / 0.074</td><td>0.077 / 0.033</td><td>0.039 / 0.016</td></tr></table>
Table 7: Model performance on the Description task evaluated using traditional image captioning metrics. The results of the evaluation under the Spontaneous Description mode and Directed Reasoning mode are presented before and after the “/” in each table cell.
We have a description of an image, and the description tells a detailed story unfolding in the image. In the process of describing an image, it is often necessary to engage in reasoning about events based on the clues within the image, leading to certain conclusions. For example, when we see that the wind is blowing outside and a man is reading a newspaper in the telephone booth, we can infer that he is actually hiding from the wind in the telephone booth. Therefore, in this task, in addition to the image description, the reasoning processes about events within the image description have also been extracted. For each reasoning process, we use A1 + A2 + ... -> B to represent it, where A1, A2, ... are clues we observed in the picture and B represents the conclusion about the event we inferred.
Thus, given an image description and the reasoning processes about events, our task is:
1) Generate a question based on the reasoning processes about events.

2) Generate four options: A, B, C, and D. There is only one correct answer among the four options, which is consistent with the description and reasoning processes provided. The correct answer option should be randomly chosen from A, B, C, and D. For the incorrect options (distractors), you are encouraged to hallucinate some clues that are highly relevant to the question and the description but are not actually consistent with the description. That is, you can distort the facts in the description and reasoning processes using elements related to the question to generate some easily selectable distractors. It would be better if you can generate some distractors that are similar to but different from the correct option. Please avoid situations where the correct option is significantly longer or shorter than the distractors.
For example, if the description is "There is some snow on the ground and it is windy, ... We can see it is cold. Inside a phone booth, a man is smiling while looking at a newspaper. He is sheltering from the cold wind in the phone booth..." and the question is "Why can we tell that the man is seeking shelter for warmth?", you can use "newsstand", which is related to "seeking shelter for warmth" in the question, to distort the fact in the description "in a phone booth." Then you can get "the man is in the newsstand." Similarly, you can hallucinate a question-related distractor "it is raining and a man is smiling and reading a newspaper in a phone booth," which is similar to the correct option "it is windy and a man is smiling and reading a newspaper in a phone booth," but different from it and inconsistent with the description.
3) Generate the letter corresponding to the correct answer, that is A, B, C, or D.
Here are some examples:
[Description]:
There is some snow on the ground and it is windy, indicating it is winter. There are two men and two women standing on the roadside. There is a sign that says "NO STANDING BUS STOP", indicating it is near a bus stop. A man is standing on the roadside, wrapping his coat tightly around himself, and peering out onto the road. They are probably waiting for a bus here. We can see it is cold. Inside a phone booth, a man is smiling while looking at a newspaper. He is sheltering from the cold wind in the phone booth. He looks happy, because it is warm there. Two women are also wrapping their coats tightly and looking at the man in the phone booth. They are probably friends and standing together. They are unhappy with the man. There are some buildings by the road.
[Event Reasoning]:
It is windy and cold. + A man is standing in a phone booth reading a newspaper. -> The man is sheltering from the cold wind in the phone booth.
[Generated Multiple-Choice Questions]:
What is the man doing in the phone booth?
A. Making a phone call.
B. Reading a book.
C. Avoiding someone he doesn't want to see.
D. Sheltering from the cold wind.

Correct Answer: [D]
Why can we tell that the man is seeking shelter for warmth?
A. It is windy and a man is smiling and reading a newspaper in a newsstand.
B. It is raining and a man is smiling and reading a newspaper in a newsstand.
C. It is windy and a man is smiling and reading a newspaper in a phone booth.
D. It is raining and a man is smiling and reading a newspaper in a phone booth.

Correct Answer: [C]
Please:
1) Generate at least one question for each reasoning process.
2) Generate more diverse questions; try to generate questions from different perspectives or angles and don't limit yourself to the question templates provided in the examples.
3) Avoid generating repetitive questions with similar meanings.
Figure 7: An example prompt of CoR-based GPT-assisted question generation for GPT-4 to generate questions based on [Event Reasoning] CoRs.
Given a <DESCRIPTION> and some <KEY POINT>s, please tell me if the <DESCRIPTION> explicitly presents the exact or similar semantics of each <KEY POINT>. The following points are required:
1) Instead of reasoning about whether a <KEY POINT> is possibly correct based on the <DESCRIPTION>, you only need to determine whether the <DESCRIPTION> mentions the semantics in the <KEY POINT>.
2) Do not overlook the semantics in the <DESCRIPTION> that are semantically equivalent to the <KEY POINT> but expressed in different ways. For instance, if the <DESCRIPTION> mentions "The woman is playing with her son...," we can tell it successfully includes the semantics in the <KEY POINT> "The woman is the mother of the boy."
3) If several possible scenarios are listed using 'or' in a <KEY POINT>, you only need to determine whether one of these scenarios is mentioned in the <DESCRIPTION>.
Assign a score of 0 or 1 to each <KEY POINT>, where 0 represents NO and 1 represents YES.
<DESCRIPTION>:
{Description generated by a model.}
<KEY POINT>:
1. {Annotated key point 1.}
2. {Annotated key point 2.}
...
N. {Annotated key point N.}
Please write your answers in “[ ]” with 0 or 1 in the following format (number + square brackets):
1. [1] 2. [0]
Your answers to the {N} <KEY POINT>(s) above:
1. [] 2. [] ... N. []

Figure 8: Cognition evaluation prompt of reasoning types other than [Event Relationship Reasoning].

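Both cognition evaluation prompts ask the model to return its judgments in a fixed bracketed format ("1. [1] 2. [0] ..."). A small parser such as the following hypothetical helper (not from the paper) could recover the 0/1 scores:

```python
# Hypothetical helper (not from the paper): parse the "1. [1] 2. [0] ..."
# answer format that the prompts above request from GPT-4 or ChatGPT.
import re

def parse_scores(answer: str) -> dict:
    """Map each key-point index to its 0/1 score."""
    return {int(i): int(s)
            for i, s in re.findall(r"(\d+)\.\s*\[([01])\]", answer)}

print(parse_scores("1. [1] 2. [0] 3. [1]"))  # {1: 1, 2: 0, 3: 1}
```
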
Given a <DESCRIPTION> and some <EVENT RELATIONSHIP>s, please tell me whether this <DESCRIPTION> clearly depicts the cause-and-effect relationships between events.
The format of an <EVENT RELATIONSHIP> follows the structure "A1 + A2 + ... + An -> B", where A1, A2, ..., An and B are events. Events A1, A2, ..., An are the causes of event B, and event B is the result caused by events A1, A2, ..., An. The criteria for judgment lie in whether the <DESCRIPTION> mentions these events and clearly depicts the causal relationships between them.
Assign a score of 0 or 1 to each <EVENT RELATIONSHIP>, where 0 represents NO and 1 represents YES.
<DESCRIPTION>:
{Description generated by a model.}
<EVENT RELATIONSHIP>:
1. {Annotated event relationship 1.}
2. {Annotated event relationship 2.}
...
N. {Annotated event relationship N.}
Please write your answers in “[ ]” with 0 or 1 in the following format (number + square brackets):
1. [1] 2. [0]
Your answers to the {N} <EVENT RELATIONSHIP>(s) above:
1. [] 2. [] ... N. []

Figure 9: Cognition evaluation prompt of [Event Relationship Reasoning].

2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d8b345df5e9afc7e36072b1bbad5f439e1e0f310c51fcaa6b92e6e33c898912d
size 777482
2025/A Cognitive Evaluation Benchmark of Image Reasoning and Description for Large Vision-Language Models/layout.json
ADDED
The diff for this file is too large to render. See raw diff
2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/1137dc0d-0b8d-4c1f-bfdb-069a25b57730_content_list.json
ADDED
@@ -0,0 +1,1962 @@
[
  {"type": "text", "text": "A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships", "text_level": 1, "bbox": [248, 90, 747, 130], "page_idx": 0},
  {"type": "text", "text": "Kento Watanabe and Masataka Goto", "bbox": [331, 158, 665, 172], "page_idx": 0},
  {"type": "text", "text": "National Institute of Advanced Industrial Science and Technology (AIST)", "bbox": [200, 174, 796, 190], "page_idx": 0},
  {"type": "text", "text": "{kento.watanabe, m.goto}@aist.go.jp", "bbox": [322, 192, 677, 208], "page_idx": 0},
  {"type": "text", "text": "Abstract", "text_level": 1, "bbox": [260, 260, 339, 275], "page_idx": 0},
  {"type": "text", "text": "Dancing to music with lyrics is a popular form of expression. While it is generally accepted that there are relationships between lyrics and dance motions, previous studies have not explored these relationships. A major challenge is that the relationships between lyrics and dance motions are not constant throughout a song but are instead localized to specific parts. To address this challenge, we hypothesize that lyrics and dance motions that co-occur across multiple songs are related. Based on this hypothesis, we propose a novel data-driven method to detect the parts of songs where meaningful relationships between lyrics and dance motions exist. We use clustering to transform lyrics and dance motions into symbols, enabling the calculation of co-occurrence frequencies and detection of significant correlations. The effectiveness of our method is validated by a dataset of time-synchronized lyrics and dance motions, which showed high correlation values for emotionally salient lyrics such as \"love\", which is expressed in heart-shaped motions. Furthermore, using our relationship detection method, we propose a method for retrieving dance motions from lyrics that outperforms previous text-to-motion retrieval methods, which focus on prose and non-dance motions.", "bbox": [144, 282, 460, 680], "page_idx": 0},
  {"type": "text", "text": "1 Introduction", "text_level": 1, "bbox": [114, 687, 260, 703], "page_idx": 0},
  {"type": "text", "text": "Platforms like YouTube and TikTok have popularized dance videos, allowing amateur dancers to express their creativity, but choreographing these performances is challenging for those without professional training. Consequently, previous studies have focused on synthesizing or retrieving dance motions that match the rhythms and timbres of music (Zhu et al., 2024). However, choreographic design is influenced by both audio attributes and lyrics, as dancers often interpret lyrics literally, such as by making heart shapes to express \"love\". However, no empirical studies have quantitatively analyzed the relationships between lyrics", "bbox": [110, 711, 490, 921], "page_idx": 0},
  {"type": "text", "text": "and dance, and no studies have specifically focused on retrieving dance motions associated with lyrics. Quantitative analysis of lyrics and dance motions offers two main contributions. Academically, this analysis provides clear evidence of how lyrics and dance motions are related. Practically, it enables the development of systems that suggest dance motions based on user-input lyrics, offering choreographic ideas to support dancers, particularly amateurs, in creating or enhancing their choreography.", "bbox": [507, 261, 884, 420], "page_idx": 0},
  {"type": "text", "text": "Previous studies have quantified the relationship between prose and corresponding non-dance gestures (e.g., \"A man walks in a quarter circle to the left\") (Yu et al., 2024; Horie et al., 2023; Petrovich et al., 2023; Tevet et al., 2022). These methods assume that text and motion are analyzed in units of sentences and sequences, respectively, and that all text-motion pairs are inherently related. However, these methods fall short for analyzing interactions between lyrics and dance motions, because those interactions are often localized to specific parts of songs rather than uniformly distributed across the entire song. For example, functional words like \"the\" or \"is\" may not correspond to any dance motion, while emotionally charged words like \"love\" might correlate with specific motions. To analyze lyrics and dance motions, we need a method that can detect which frames<sup>1</sup> have motions related to lyrics.", "bbox": [507, 423, 884, 728], "page_idx": 0},
  {"type": "text", "text": "This study proposes a novel data-driven method to analyze and quantify the relationship between lyrics and dance motions as illustrated in Figure 1. We hypothesize that lyrics and dance motions that co-occur in different songs are related. For example, if \"jump\" is sung while a jumping motion is performed in multiple songs, the co-occurrence suggests a meaningful relationship. To quantify these relationships, we first transform lyrics and", "bbox": [507, 728, 885, 873], "page_idx": 0},
  {"type": "page_footnote", "text": "<sup>1</sup>A 'frame' refers to a single pose within a sequence of dance motions, similar to a frame in a video. Dance motions are recorded at a specific frames-per-second rate.", "bbox": [507, 883, 882, 921], "page_idx": 0},
  {"type": "page_number", "text": "7901", "bbox": [480, 927, 519, 940], "page_idx": 0},
  {"type": "footer", "text": "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies", "bbox": [18, 945, 979, 959], "page_idx": 0},
  {"type": "footer", "text": "(Volume 1: Long Papers), pages 7901-7916", "bbox": [364, 958, 633, 971], "page_idx": 0},
  {"type": "footer", "text": "April 29 - May 4, 2025 ©2025 Association for Computational Linguistics", "bbox": [273, 972, 722, 985], "page_idx": 0},
  {"type": "image", "img_path": "images/70bacebc801c640dac4536427b5a876a79734eb603bfa89ab796d7d88595be08.jpg", "image_caption": ["Figure 1: Overview of the proposed data-driven method."], "image_footnote": [], "bbox": [122, 86, 872, 305], "page_idx": 1},
  {"type": "text", "text": "dance motions of each frame into discrete symbols by using clustering methods. These symbols allow us to calculate their co-occurrence frequency. We then calculate Pointwise Mutual Information (PMI), a measure used in natural language processing (NLP) methods, and use it to evaluate these frequencies. By using PMI we find positive correlations for frequent co-occurrences and negative correlations for independent occurrences, thus detecting specific frames where lyrics and dance motions are related.", "bbox": [112, 355, 489, 530], "page_idx": 1},
  {"type": "text", "text": "To investigate the effectiveness of our method, we have prepared a novel dataset of timesynchronized lyrics and dance motions, including the dancer's finger motions. Applying our method to this dataset, we have detected frames with high PMI values between lyrics and dance motions. For example, the lyric \"love\" corresponds to dance motions that form heart shapes with the hands, and the phrase \"getting lost\" is associated with motions that suggest searching for a destination. Our proposed method therefore makes it possible to quantify and analyze specific relationships between lyrics and dance motions, revealing valuable correlations.", "bbox": [112, 533, 489, 743], "page_idx": 1},
  {"type": "text", "text": "In addition, we have developed a lyrics-to-dance motion retrieval method by utilizing our method to detect relationships between lyrics and dance motions. This method outperforms an existing text-to-motion retrieval method based on contrastive learning, which assumes that all text-motion pairs are inherently related. Unlike this general method that focuses on text and non-dance motions, our method specifically targets lyrics and dance motions that frequently co-occur across multiple songs, indicating a meaningful relationship. Our method over", "bbox": [112, 744, 489, 921], "page_idx": 1},
  {"type": "text", "text": "comes the limitations observed in broader text-to-motion studies and improves retrieval performance, providing an effective solution in this area.", "bbox": [507, 355, 884, 403], "page_idx": 1},
  {"type": "text", "text": "2 Related Work", "text_level": 1, "bbox": [509, 418, 665, 432], "page_idx": 1},
  {"type": "text", "text": "2.1 Audio-to-Dance Motion Synthesis", "text_level": 1, "bbox": [507, 445, 818, 462], "page_idx": 1},
  {"type": "text", "text": "Previous studies have focused on correlating dance motions with the rhythmic and timbral features of audio signals (Zhu et al., 2024). A notable trend in this area is the use of diffusion models to synthesize dance motions from music audio (Tseng et al., 2023; Dabral et al., 2023; Li et al., 2024; Zhang et al., 2024; Luo et al., 2024; Qi et al., 2023). These models represent dance motion as matrices defined by frame counts $T$ and joint parameters $J$ , and they use architectures such as U-Nets (Ronneberger et al., 2015) or Transformers (Vaswani et al., 2017) to reconstruct these matrices. While promising, synthesis based on diffusion models often results in long computation times and can produce motions plagued by unnatural artifacts such as jitter or sliding feet.", "bbox": [505, 468, 884, 726], "page_idx": 1},
  {"type": "text", "text": "An alternative method involves motion graph-based synthesis (Chen et al., 2021; Au et al., 2022; Gao et al., 2022) structured into three phases: segmenting audio into musical bars, retrieving dance motions from a database for each segment, and ensuring natural transitions between sequences. This method, ideal for music with repetitive structures like verse-bridge-chorus, maintains consistency across similar musical sections.", "bbox": [507, 727, 884, 871], "page_idx": 1},
  {"type": "text", "text": "Unlike these studies, we do not focus on synthesizing dance motions from audio signals. Instead, we present a novel method that quantifies the rela", "bbox": [507, 873, 884, 921], "page_idx": 1},
  {"type": "page_number", "text": "7902", "bbox": [480, 927, 519, 940], "page_idx": 1},
  {"type": "text", "text": "tionship between lyrics and specific dance motions, offering a unique approach within the field of dance motion analysis. Our method enriches the understanding of how lyrics influence and correspond to dance, and it provides findings that could improve future methods of dance motion synthesis and retrieval by ensuring that dance motions are more closely aligned with lyrical content.", "bbox": [112, 84, 489, 212], "page_idx": 2},
  {"type": "text", "text": "2.2 Text and Motion Relationships", "text_level": 1, "bbox": [112, 223, 401, 237], "page_idx": 2},
  {"type": "text", "text": "The relationship between prose text descriptions and corresponding non-dance motion has attracted considerable interest. Recent advances have used contrastive learning to embed text and motion feature vectors in a shared vector space, effectively aligning related text and motion vectors to improve the accuracy of motion vector retrieval (Yu et al., 2024; Horie et al., 2023; Petrovich et al., 2023; Tevet et al., 2022). In addition, the integration of large language models has extended the capabilities of text-motion analysis (Jiang et al., 2023). A particularly innovative method involves the use of vector quantized-variational auto-encoders to transform motion data into codebooks, which are then treated as pseudowords. These pseudowords are integrated with text to train language models, supporting a range of applications from text-to-motion synthesis to motion description generation and predictive modeling.", "bbox": [115, 243, 489, 548], "page_idx": 2},
  {"type": "text", "text": "However, many existing methods assume a uniform relationship across all text-motion pairs, an assumption that may not reflect the complexity of lyrics-dance interactions. Contrary to the commonly assumed uniformity, lyrics-dance relationships may appear selectively, becoming prominent only in particular contexts, such as when emotionally charged words like \"love\" inspire certain dance motions. Our study examines how lyrics influence dance in different situations, allowing us to detail the multifaceted nature of these interactions.", "bbox": [112, 550, 489, 725], "page_idx": 2},
  {"type": "text", "text": "3 Time-SynchronizedLyrics and Dance Motion Pair Data", "text_level": 1, "bbox": [112, 736, 472, 769], "page_idx": 2},
  {"type": "text", "text": "Our goal is to analyze and quantify the relationship between lyrics and dance motions. To achieve this, we need data where each frame of dance motion is associated with specific words or sentences from the corresponding lyrics.", "bbox": [112, 778, 487, 858], "page_idx": 2},
  {"type": "text", "text": "3.1 Data Collection", "text_level": 1, "bbox": [112, 869, 284, 883], "page_idx": 2},
  {"type": "text", "text": "We collected 1,000 dance motion datasets (totaling 55.3 hours) from the MikuMikuDance community,", "bbox": [112, 889, 487, 921], "page_idx": 2},
  {"type": "text", "text": "where creators manually trace dance motions from dance videos on platforms such as YouTube and NicoNico or use motion capture technologies to create dance sequences. Our collection includes 979 traced motions and 21 captured motions. These dance motions correspond to 868 unique songs, as some songs have multiple associated dance motions.", "bbox": [507, 84, 884, 211], "page_idx": 2},
  {"type": "text", "text": "We obtained the corresponding audio and lyrics for these songs from various online resources. Most of the lyrics are in Japanese, with some in English. To ensure synchronization between the dance motions and the audio, we manually aligned their start times and annotated the start and end times of each lyric sentence during audio playback. Word-level timing annotation, which is labor-intensive, was refined using an automatic synchronization method (Nakano and Goto, 2016) that allows precise alignment of individual words. As a result, each frame of the motion data, recorded at 30 frames per second, is associated with specific words and sentences. Frames without corresponding lyrics were assigned a padding token, represented as [PAD], to maintain sequence consistency. As these choreographies and lyrics are copyrighted, we do not plan to make the collected data publicly available. However, the code for training the proposed model and conducting the evaluation is available at https://github.com/KentoW/lyrics-and-dance.", "bbox": [507, 214, 884, 565], "page_idx": 2},
  {"type": "text", "text": "3.2 Dance Motion Data Pre-processing", "text_level": 1, "bbox": [507, 577, 830, 594], "page_idx": 2},
  {"type": "text", "text": "The human skeletal model in our dataset comprises 53 joints, each represented by global coordinates (x, y, z) and Euler angles (roll, pitch, yaw). See Appendix B for details on the structure of the human skeletal model. To avoid issues like gimbal lock, we convert these angles into a six-dimensional format using sine and cosine transformations: sin(roll), cos(roll), sin(pitch), cos(pitch), sin(yaw), and cos(yaw). Preliminary tests showed this transformation method to be more effective than using quaternions.", "bbox": [507, 599, 882, 775], "page_idx": 2},
  {"type": "text", "text": "In this study, we define the unit of analysis for dance motions as bars for easier analysis. Using the downbeat tracking method (Böck et al., 2016), we segmented the dance motion and audio data into bars, discarding any bars shorter than one second. This resulted in a dataset of 119,691 bars: 92,723 with lyrics and 26,968 without.", "bbox": [507, 776, 882, 888], "page_idx": 2},
  {"type": "text", "text": "To ensure uniform spatial positioning for consistent analysis, we adjusted the y axis to set the", "bbox": [507, 889, 882, 921], "page_idx": 2},
  {"type": "page_number", "text": "7903", "bbox": [480, 927, 519, 940], "page_idx": 2},
  {"type": "text", "text": "minimum coordinate of the \"toe\" joints to zero and adjusted the x and z axes to align the human model's average position with the origin for each bar. We calculated velocities and accelerations for each joint based on its positional and angular data, including local xyz directional velocities and accelerations in six dimensions and point-to-point global velocities and accelerations in two dimensions. We derived the first and second derivatives of the six-dimensional angular parameters, integrating twelve additional dimensions. As a result, each joint frame contained 29-dimensional parameters, providing a detailed framework for comprehensive motion analysis. Finally, we normalized the 29-dimensional parameters to ensure that they ranged between $-1$ and $1$ .", "bbox": [110, 84, 492, 341], "page_idx": 3},
  {"type": "text", "text": "4 Analyzing Lyrics and Dance Motion Relationships", "text_level": 1, "bbox": [112, 353, 460, 387], "page_idx": 3},
  {"type": "text", "text": "In this section we present a method for detecting frames that demonstrate meaningful relationships between lyrics and dance motions within our time-synchronized data. Based on our hypothesis that the frequent co-occurrence of lyrics and dance motions across multiple songs indicates meaningful relationships, we use co-occurrence frequencies to quantitatively analyze these interactions.", "bbox": [112, 394, 489, 521], "page_idx": 3},
  {"type": "text", "text": "To analyze the co-occurrence relationships at each frame, we utilize PMI, a metric used in NLP. PMI is calculated with the formula", "bbox": [112, 523, 489, 570], "page_idx": 3},
  {"type": "equation", "text": "\n$$\n\\operatorname {PMI} (x, y) = \\log \\frac {P (x, y)}{P (x) P (y)} \\tag {1}\n$$\n", "text_format": "latex", "bbox": [191, 575, 487, 609], "page_idx": 3},
  {"type": "text", "text": "where $x$ and $y$ respectively represent the lyrics and dance motions at a specific frame. $P(x, y)$ is the probability of their co-occurrence within the same frame, and $P(x)$ and $P(y)$ are the probabilities of observing $x$ and $y$ independently at any frame. High PMI values at a frame indicate a strong association between the lyrics and dance motions at that particular moment.", "bbox": [112, 615, 489, 743], "page_idx": 3},
  {"type": "text", "text": "To apply PMI, we convert lyrics and dance motions into discrete symbols $x$ and $y$ for each frame. This involves transforming feature vectors, derived from deep learning models, into a form suitable for PMI analysis. Using a clustering method, we categorize similar motions and lyrical expressions into codebooks $x$ and $y$ , which serve as the basis for calculating PMI, enabling us to detect frames where lyrics and dance motions are closely related.", "bbox": [112, 744, 489, 888], "page_idx": 3},
  {"type": "text", "text": "The process begins with extracting feature vectors for lyrics and dance motions by using deep", "bbox": [112, 889, 489, 921], "page_idx": 3},
  {"type": "text", "text": "learning models. We then implement clustering to group similar lyrics and dance motions, facilitating the PMI calculations for our analysis.", "bbox": [507, 84, 882, 131], "page_idx": 3},
  {"type": "text", "text": "4.1 **Lyrics Feature Extraction**", "text_level": 1, "bbox": [507, 143, 759, 158], "page_idx": 3},
  {"type": "text", "text": "Before clustering lyrics, we calculate feature vectors using a language model, either at the word level or the sentence level. This distinction is crucial, as the relationship between lyrics and dance motions can vary. For example, \"jump\" may correspond to a specific motion, while \"I feel free like a bird in the sky\" can inspire broader, fluid motions, capturing the overall feeling.", "bbox": [507, 164, 884, 291], "page_idx": 3},
  {"type": "text", "text": "To explore these relationships, we use a pre-trained multilingual Sentence-BERT model $^2$ (Reimers and Gurevych, 2019) to generate both word-level and sentence-level feature vectors. This method allows us to analyze how individual words and broader thematic content influence corresponding dance motions.", "bbox": [507, 293, 884, 405], "page_idx": 3},
  {"type": "text", "text": "4.2 Dance Motion Feature Extraction", "text_level": 1, "bbox": [507, 416, 820, 430], "page_idx": 3},
  {"type": "text", "text": "We developed a novel dance motion encoder to extract features from our dataset, which includes detailed elements like finger joint configurations and variable sequence lengths. This encoder pre-processes both the physical and expressive components of dance, providing features for clustering.", "bbox": [507, 436, 882, 532], "page_idx": 3},
  {"type": "text", "text": "The encoder processes two data types per frame: motion sequences and affective features. Each frame's motion data, denoted $S_{t}$ , is a matrix with dimensions $J \\times P$ , where $J$ represents the number of joints and $P$ represents the number of parameters per joint. These matrices, $S_{1}, \\ldots, S_{t}, \\ldots, S_{T}$ , cover $T$ frames within a musical bar, capturing detailed motion across the sequence.", "bbox": [507, 533, 882, 661], "page_idx": 3
"page_idx": 3
|
| 645 |
+
},
|
| 646 |
+
{
|
| 647 |
+
"type": "text",
|
| 648 |
+
"text": "Additionally, we include affective features for each frame, denoted as $\\pmb{a}_t$ . These vectors, derived from geometric properties like volume, area, length, and curvature between joints, represent the expressive qualities of dance. The sequence of these features across a bar is denoted as $\\pmb{a}_1, \\dots, \\pmb{a}_t, \\dots, \\pmb{a}_T$ . Affective features recognized in human motion analysis (Kleinsmith and Bianchi-Berthouze, 2013; Crenn et al., 2016; Bhattacharya et al., 2020a,b, 2021) help capture the expressive aspects of dance, enriching our encoder's data input. By combining motion matrices and affective features, our encoder processes inputs that reflect both the physical execution and expressive dynamics of dance.",
|
| 649 |
+
"bbox": [
|
| 650 |
+
507,
|
| 651 |
+
662,
|
| 652 |
+
884,
|
| 653 |
+
887
|
| 654 |
+
],
|
| 655 |
+
"page_idx": 3
|
| 656 |
+
},
|
| 657 |
+
{
|
| 658 |
+
"type": "page_footnote",
|
| 659 |
+
"text": "$^{2}$ https://huggingface.co/sentence-transformers/ paraphrase-multilingual-MiniLM-L12-v2",
|
| 660 |
+
"bbox": [
|
| 661 |
+
507,
|
| 662 |
+
894,
|
| 663 |
+
882,
|
| 664 |
+
921
|
| 665 |
+
],
|
| 666 |
+
"page_idx": 3
|
| 667 |
+
},
|
| 668 |
+
{
|
| 669 |
+
"type": "page_number",
|
| 670 |
+
"text": "7904",
|
| 671 |
+
"bbox": [
|
| 672 |
+
480,
|
| 673 |
+
927,
|
| 674 |
+
519,
|
| 675 |
+
940
|
| 676 |
+
],
|
| 677 |
+
"page_idx": 3
|
| 678 |
+
},
|
| 679 |
+
{
|
| 680 |
+
"type": "image",
|
| 681 |
+
"img_path": "images/d894c62bfad9d10ea61500e74df3418d172a91a4bfd3c266206d0c89ca79f7b0.jpg",
|
| 682 |
+
"image_caption": [
|
| 683 |
+
"Figure 2: Overview of the proposed dance motion encoder and auto-encoder."
|
| 684 |
+
],
|
| 685 |
+
"image_footnote": [],
|
| 686 |
+
"bbox": [
|
| 687 |
+
122,
|
| 688 |
+
84,
|
| 689 |
+
884,
|
| 690 |
+
406
|
| 691 |
+
],
|
| 692 |
+
"page_idx": 4
|
| 693 |
+
},
|
| 694 |
+
{
|
| 695 |
+
"type": "text",
|
| 696 |
+
"text": "4.2.1 Affective Feature",
|
| 697 |
+
"text_level": 1,
|
| 698 |
+
"bbox": [
|
| 699 |
+
112,
|
| 700 |
+
461,
|
| 701 |
+
309,
|
| 702 |
+
475
|
| 703 |
+
],
|
| 704 |
+
"page_idx": 4
|
| 705 |
+
},
|
| 706 |
+
{
|
| 707 |
+
"type": "text",
|
| 708 |
+
"text": "We use 40 affective features, including the area of triangles formed by major joints, volume, distances between key joints, and limb curvature. These features provide a multidimensional view of the dancer's expressive state, revealing various emotional states. For example, a smaller area may suggest a reserved or tense posture, while a larger area implies openness and relaxation. See Appendix B for more details about the 40 affective features.",
|
| 709 |
+
"bbox": [
|
| 710 |
+
112,
|
| 711 |
+
483,
|
| 712 |
+
487,
|
| 713 |
+
626
|
| 714 |
+
],
|
| 715 |
+
"page_idx": 4
|
| 716 |
+
},
|
| 717 |
+
{
|
| 718 |
+
"type": "text",
|
| 719 |
+
"text": "For each frame of motion data, we calculate these features along with their velocity and acceleration, forming a 120-dimensional affective feature vector. This method captures both the static posture and the dynamics of movement, which are crucial for understanding the fluidity and intensity of dance motions. We apply min-max normalization to these vectors, ensuring all features range from 0 to 1. This standardization allows for consistent and meaningful comparisons across different dance sequences.",
|
| 720 |
+
"bbox": [
|
| 721 |
+
112,
|
| 722 |
+
629,
|
| 723 |
+
489,
|
| 724 |
+
804
|
| 725 |
+
],
|
| 726 |
+
"page_idx": 4
|
| 727 |
+
},
|
| 728 |
+
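A minimal sketch of the 120-dimensional affective vector, assuming it is the 40 geometric features plus their finite-difference velocities and accelerations; the frame rate and layout are illustrative assumptions.

```python
# Minimal sketch (assumed layout): build the 120-dim affective vector from
# 40 per-frame geometric features plus their velocity and acceleration,
# then apply min-max normalization to [0, 1] as described above.
import numpy as np

def affective_vectors(feats: np.ndarray, fps: float = 30.0) -> np.ndarray:
    """feats: (T, 40) geometric features per frame -> (T, 120)."""
    dt = 1.0 / fps
    vel = np.gradient(feats, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    a = np.concatenate([feats, vel, acc], axis=-1)   # (T, 120)
    lo = a.min(axis=0, keepdims=True)
    hi = a.max(axis=0, keepdims=True)
    return (a - lo) / np.maximum(hi - lo, 1e-8)      # min-max to [0, 1]
```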
{
"type": "text",
"text": "4.2.2 Dance Motion Encoder Architecture",
"text_level": 1,
"bbox": [112, 819, 460, 834],
"page_idx": 4
},
{
"type": "text",
"text": "Our dance motion encoder consists of a skeletal encoder and a temporal encoder, as shown on the left side of Figure 2. The process starts with skeletal data for each frame $S_{t}$. Each $P$-dimensional joint vector in $S_{t}$ is transformed by",
"bbox": [112, 841, 489, 921],
"page_idx": 4
},
{
"type": "text",
"text": "a specific Multi-Layer Perceptron (MLP) for each joint. These transformed vectors are fed into a Graph Transformer, which uses Laplacian positional embeddings to maintain the relative positions of the joints (Rao and Miao, 2023). The output is a $J \\times D$-dimensional vector for the joints, which is flattened and compressed through a linear layer to produce a single $D$-dimensional vector representing the skeletal features for each frame, denoted as $\\boldsymbol{u}_t$. This vector captures the comprehensive skeletal structure of the dance motions at each frame.",
"bbox": [507, 461, 884, 638],
"page_idx": 4
},
{
"type": "text",
"text": "Simultaneously, affective features $(\\pmb{a}_t)$ are calculated for each frame, capturing emotional dynamics through metrics like joint areas and volumes. Before inputting into the temporal encoder, skeletal feature vectors $(\\pmb{u}_t)$ and affective features $(\\pmb{a}_t)$ are concatenated, forming a combined $D + 120$-dimensional vector for each frame. This vector is then compressed through an MLP to ensure uniform $D$-dimensional vector consistency across all frames. The compressed vectors are fed into a Transformer encoder without positional embeddings (Haviv et al., 2022), emphasizing intrinsic interactions across frames rather than chronological order. This method generates a sequence of motion feature vectors $\\left(\\left[\\pmb{v}_{1},\\dots,\\pmb{v}_{t},\\dots,\\pmb{v}_{T}\\right]\\right)$, which comprehensively represent both the skeletal structure and emotional dynamics of the dance.",
"bbox": [507, 646, 884, 921],
"page_idx": 4
},
{
"type": "page_number",
"text": "7905",
"bbox": [480, 928, 519, 940],
"page_idx": 4
},
{
"type": "text",
"text": "To train our dance motion encoder, we use an auto-encoder method suitable for our dataset that lacks gold labels. The dance motion decoder, shown on the right side of Figure 2, is designed as a reverse architecture of the encoder. Both the encoder and the decoder are trained by minimizing the Mean Squared Error (MSE) between the inputs and reconstructed outputs. The training process reconstructs skeletal matrices $\\hat{S}_t$ with tanh activation and affective features $\\hat{\\boldsymbol{a}}_t$ with sigmoid activation, ensuring the output matches the input:",
"bbox": [112, 84, 489, 261],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nLoss = \\sum_{t=1}^{T} \\left( \\operatorname{MSE}\\left(S_{t}, \\tanh\\left(\\hat{S}_{t}\\right)\\right) + \\operatorname{MSE}\\left(\\boldsymbol{a}_{t}, \\operatorname{sigmoid}\\left(\\hat{\\boldsymbol{a}}_{t}\\right)\\right) \\right). \\tag{2}\n$$\n",
"text_format": "latex",
"bbox": [129, 267, 487, 338],
"page_idx": 5
},
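A minimal PyTorch sketch of Equation (2). Treating each MSE term as a per-frame mean summed over the bar is our reading of the formula, not code from the paper.

```python
# Minimal sketch of the reconstruction loss in Equation (2): tanh on the
# reconstructed skeletal matrices, sigmoid on the reconstructed affective
# features, per-frame MSE against the inputs, summed over the bar.
import torch
import torch.nn.functional as F

def reconstruction_loss(S, S_hat, a, a_hat):
    """S, S_hat: (T, J, P) skeletal matrices; a, a_hat: (T, 120)."""
    loss_skel = F.mse_loss(torch.tanh(S_hat), S,
                           reduction="none").mean(dim=(1, 2)).sum()
    loss_aff = F.mse_loss(torch.sigmoid(a_hat), a,
                          reduction="none").mean(dim=1).sum()
    return loss_skel + loss_aff
```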
{
"type": "text",
"text": "We train our auto-encoder on the full dataset of 119,691 bars using the AdamW optimizer (Loshchilov and Hutter, 2019), with a mini-batch size of 8 over 200 epochs. Our skeletal encoder processes data from 53 joints ($J = 53$) and 29 motion parameters per joint ($P = 29$). The encoder and decoder, with a dimensionality of $D = 256$, use 4 multi-heads and 2 layers for skeletal processing, and 8 multi-heads and 4 layers for temporal processing.",
"bbox": [112, 343, 489, 505],
"page_idx": 5
},
{
"type": "text",
"text": "4.3 Lyrics and Dance Motion Clustering",
"text_level": 1,
"bbox": [112, 514, 448, 530],
"page_idx": 5
},
{
"type": "text",
"text": "To confirm that lyrics and dance motions that co-occur in multiple songs indicate a meaningful relationship, it is crucial to preprocess the data before applying k-means clustering. This is because repetitive elements within a single song could skew our analysis. For example, if a word like \"flower\" is associated with a particular dance motion multiple times within only a single song, their association does not necessarily indicate a meaningful relationship. If the word and motion co-occur in multiple songs, however, their co-occurrence is likely to be meaningful.",
"bbox": [112, 535, 489, 727],
"page_idx": 5
},
{
"type": "text",
"text": "To avoid clustering bias caused by duplicate entries within our dataset, we preprocess both lyrics and dance motions before clustering. Specifically, we identify and unify duplicate sentences within the lyrics to ensure that each unique sentence is represented only once in the clustering process. This prevents the formation of clusters dominated by repeated sentences. Similarly, for dance motions, we treat vectors with a cosine similarity of 0.99 or higher as duplicates and unify them. These preprocessing steps enable more accurate clustering by ensuring both lyrics and dance motions are",
"bbox": [112, 728, 489, 921],
"page_idx": 5
},
{
"type": "text",
"text": "represented by distinct, non-redundant data points. With the data cleaned, we transform word vectors into the codebook $x_{w}$ and sentence vectors into the codebook $x_{s}$ for each frame. Similarly, dance motion vectors for each frame are transformed into the codebook $y$.",
"bbox": [507, 84, 884, 181],
"page_idx": 5
},
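A minimal sketch of this deduplication step. The 0.99 cosine-similarity threshold is from the paper; the greedy keep-first strategy and the scikit-learn usage are illustrative assumptions.

```python
# Minimal sketch: greedily drop motion vectors whose cosine similarity to
# an already-kept vector is >= 0.99, then cluster the survivors.
import numpy as np
from sklearn.cluster import KMeans

def dedup_and_cluster(vecs: np.ndarray, n_clusters: int, thresh: float = 0.99):
    unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
    kept = []
    for i, v in enumerate(unit):
        # keep v only if it is not a near-duplicate of any kept vector
        if all(float(v @ unit[j]) < thresh for j in kept):
            kept.append(i)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(vecs[kept])
    return km, kept
```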
{
"type": "text",
"text": "4.4 PMI Calculation",
"text_level": 1,
"bbox": [507, 192, 687, 206],
"page_idx": 5
},
{
"type": "text",
"text": "To calculate PMI, we adjust the standard approach to fit our hypothesis that lyrics and dance motions that co-occur in multiple songs are indeed related. We calculate the probabilities based on the number of songs in which lyrics and dance motions co-occur by using the following formulas:",
"bbox": [507, 212, 884, 307],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nP(x, y) = \\frac{\\#(x, y)}{\\text{total number of songs}}, \\tag{3}\n$$\n",
"text_format": "latex",
"bbox": [561, 318, 882, 351],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nP(x) = \\frac{\\#(x)}{\\text{total number of songs}}, \\tag{4}\n$$\n",
"text_format": "latex",
"bbox": [571, 362, 882, 395],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nP(y) = \\frac{\\#(y)}{\\text{total number of songs}}. \\tag{5}\n$$\n",
"text_format": "latex",
"bbox": [571, 401, 882, 434],
"page_idx": 5
},
{
"type": "text",
"text": "Here $\\#(x, y)$ indicates the number of songs where $x$ and $y$ co-occur within the same frame, while $\\#(x)$ and $\\#(y)$ indicate the numbers of songs where $x$ and $y$ appear independently. Equations (3), (4) and (5) are then used to calculate PMI as defined in Equation (1), emphasizing the importance of frequent co-occurrences in different songs. Additionally, to avoid bias from single occurrences $(\\#(x, y) = 1)$, which can misleadingly suggest strong relationships, we set PMI for these cases to zero. For clearer interpretation, we use Normalized PMI (NPMI) by normalizing PMI values to a scale between $-1$ and $1$.",
"bbox": [507, 439, 882, 646],
"page_idx": 5
},
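A minimal sketch of Equations (3)-(5) with the single-occurrence zeroing described above. The paper does not spell out its NPMI formula, so the standard normalization $\mathrm{PMI}/(-\log P(x, y))$ is an assumption, as is the `songs` data structure.

```python
# Minimal sketch: song-level probabilities (Eqs. 3-5), PMI (Eq. 1), and
# NPMI in [-1, 1]. `songs` maps a song id to the set of (x, y) codebook
# pairs observed in its frames (illustrative data structure).
import math

def npmi(songs: dict, x, y) -> float:
    n = len(songs)
    n_xy = sum(1 for pairs in songs.values() if (x, y) in pairs)
    n_x = sum(1 for pairs in songs.values() if any(p[0] == x for p in pairs))
    n_y = sum(1 for pairs in songs.values() if any(p[1] == y for p in pairs))
    if n_xy <= 1:                 # single co-occurrence: treated as unrelated
        return 0.0
    p_xy, p_x, p_y = n_xy / n, n_x / n, n_y / n
    if p_xy == 1.0:               # co-occur in every song: NPMI maxes out
        return 1.0
    pmi = math.log(p_xy / (p_x * p_y))
    return pmi / -math.log(p_xy)  # assumed normalization to [-1, 1]
```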
{
"type": "text",
"text": "4.5 Analysis Setup",
"text_level": 1,
"bbox": [507, 659, 672, 674],
"page_idx": 5
},
{
"type": "text",
"text": "To comprehensively examine specific correlations between lyrics and dance motions across our entire dataset, we utilize high NPMI values in all lyric-motion pairs without dividing the dataset into training and test sets. We analyze these pairs using two types of lyric codebooks, $x_w$ for word-level and $x_s$ for sentence-level lyrics, alongside one codebook, $y$, for dance motions. To obtain each codebook, we applied k-means clustering to each set of lyric or motion vectors across varying codebook sizes from 500 to 7000, in increments of 500, to explore different levels of granularity and find the most effective categorization for capturing nuanced relationships.",
"bbox": [507, 680, 884, 888],
"page_idx": 5
},
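A minimal scikit-learn sketch of the codebook-size sweep described above; variable and function names are illustrative.

```python
# Minimal sketch: fit k-means for codebook sizes 500, 1000, ..., 7000 and
# keep the per-vector cluster assignments for later NPMI scoring.
import numpy as np
from sklearn.cluster import KMeans

def build_codebooks(vectors: np.ndarray) -> dict:
    codebooks = {}
    for k in range(500, 7001, 500):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(vectors)
        codebooks[k] = km.labels_        # one discrete symbol per vector
    return codebooks
```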
{
"type": "text",
"text": "For each codebook size combination, we calculated NPMI values between all cluster pairs, one",
"bbox": [507, 889, 882, 921],
"page_idx": 5
},
{
"type": "page_number",
"text": "7906",
"bbox": [480, 927, 519, 940],
"page_idx": 5
},
{
"type": "table",
"img_path": "images/5362851f93cfef43622b910b890e0a05b1b91422fcec59618ff5d01d0193b0c2.jpg",
"table_caption": ["Table 1: Numbers and percentages of songs, bars, and frames with positive NPMI values."],
"table_footnote": [],
"table_body": "<table><tr><td>Entity</td><td>In the case of NPMI(xw,y)</td><td>In the case of NPMI(xs,y)</td></tr><tr><td rowspan=\"2\">Number of songs</td><td>781 songs</td><td>980 songs</td></tr><tr><td>(78.10%)</td><td>(98.00%)</td></tr><tr><td rowspan=\"2\">Number of bars</td><td>2,671 bars</td><td>16,161 bars</td></tr><tr><td>(2.88%)</td><td>(17.43%)</td></tr><tr><td rowspan=\"2\">Number of frames</td><td>14,431 frames</td><td>91,593 frames</td></tr><tr><td>(0.03%)</td><td>(1.99%)</td></tr></table>",
"bbox": [119, 120, 485, 237],
"page_idx": 6
},
{
"type": "text",
"text": "from the motion codebook and one from the lyrics codebook. This helped us detect the optimal granularity that maximizes the meaningful mutual information. Specifically, our highest NPMI values were 0.82 for word-level analysis (with a codebook size of 5000 for lyrics and 4500 for motions) and 0.93 for sentence-level analysis (with a codebook size of 6500 for both lyrics and motions). We use these codebook sizes for the following analysis.",
"bbox": [112, 258, 489, 403],
"page_idx": 6
},
{
"type": "text",
"text": "4.6 Analysis Results",
"text_level": 1,
"bbox": [112, 414, 290, 428],
"page_idx": 6
},
{
"type": "text",
"text": "Table 1 shows that while the majority of songs (78.1% for word-level analysis and 98.0% for sentence-level analysis) have positive NPMI values, indicating strong lyrics-motion relationships, the numbers of bars and frames with positive NPMI values are quite small. This suggests that meaningful interactions between lyrics and dance motions are localized to specific parts of songs, supporting our hypothesis that meaningful relationships, while present, are not uniformly distributed across songs.",
"bbox": [112, 432, 489, 594],
"page_idx": 6
},
{
"type": "text",
"text": "The examples shown in Figure 3 were selected from musical bars with positive NPMI values to illustrate specific relationships as we interpreted them. For example, emotional expressions in lyrics, such as \"love\", are often translated into heart-shaped gestures in dance. Sentence-level correlations show clear patterns, such as \"getting lost\" with a peering motion, demonstrating how broader narrative elements within lyrics can influence dance motions. See Appendix D for other examples of interpretable relationships between lyrics and corresponding dance motions. This analysis confirms that our data-driven method can uncover intuitive relationships between lyrics and dance motions.",
"bbox": [112, 595, 489, 820],
"page_idx": 6
},
{
"type": "text",
"text": "5 Lyrics-to-Dance Motion Retrieval",
"text_level": 1,
"bbox": [112, 832, 438, 848],
"page_idx": 6
},
{
"type": "text",
"text": "We developed a method that allows input of a single musical bar of lyrics to retrieve the corresponding bar of dance motions. The input is a sequence of lyric words within a musical bar. The retrieved",
"bbox": [112, 857, 487, 921],
"page_idx": 6
},
{
"type": "text",
"text": "output is a ranked list of musical bars containing the corresponding motions.",
"bbox": [507, 84, 880, 115],
"page_idx": 6
},
{
"type": "text",
"text": "For the retrieval task, we use Dynamic Time Warping (DTW) (Berndt and Clifford, 1994) to measure the similarity between the input lyrics and available dance motions. DTW is ideal for handling time series data of varying lengths and for capturing partial frame similarities, thereby accounting for how lyrics and dance motions relate within a bar.",
"bbox": [507, 117, 882, 228],
"page_idx": 6
},
{
"type": "text",
"text": "In our DTW implementation, we derive the cost matrix from the Normalized Pointwise Mutual Information (NPMI) between the lyric and dance motion codebooks. The substitution (match) cost between elements $x$ and $y$ is defined as",
"bbox": [507, 229, 882, 309],
"page_idx": 6
},
{
"type": "equation",
"text": "\n$$\n\\operatorname{cost}(x, y) = 1 - \\left(1 + \\operatorname{NPMI}(x, y)\\right) / 2. \\tag{6}\n$$\n",
"text_format": "latex",
"bbox": [534, 317, 882, 335],
"page_idx": 6
},
{
"type": "text",
"text": "This formula assigns lower costs when there is a stronger relationship between $x$ and $y$ (i.e., higher NPMI values) and assigns higher costs when the relationship is weaker. For insertions and deletions, we assign a fixed cost of 1. This means that when aligning sequences, inserting or deleting an element incurs a constant penalty, regardless of the specific elements involved. This use of a fixed cost was based on preliminary experiments showing that using variable costs degraded performance in the retrieval task.",
"bbox": [505, 342, 882, 517],
"page_idx": 6
},
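A minimal DTW sketch using the substitution cost of Equation (6) and the fixed insertion/deletion cost of 1 described above; the `npmi` lookup is a placeholder for the trained codebook statistics.

```python
# Minimal sketch: edit-distance-style DTW over codebook symbol sequences,
# with the NPMI-based substitution cost of Equation (6) and indel cost 1.
import numpy as np

def dtw_distance(xs, ys, npmi) -> float:
    n, m, indel = len(xs), len(ys), 1.0
    D = np.zeros((n + 1, m + 1))
    D[1:, 0] = indel * np.arange(1, n + 1)   # deleting a prefix of xs
    D[0, 1:] = indel * np.arange(1, m + 1)   # inserting a prefix of ys
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 1.0 - (1.0 + npmi(xs[i - 1], ys[j - 1])) / 2.0  # Eq. (6)
            D[i, j] = min(D[i - 1, j - 1] + sub,   # match/substitute
                          D[i - 1, j] + indel,     # delete
                          D[i, j - 1] + indel)     # insert
    return float(D[n, m])
```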
{
"type": "text",
"text": "We implemented two retrieval strategies: word-to-dance motion retrieval (W2D) based on the word-to-motion $\\mathrm{NPMI}(x_w, y)$ and sentence-to-dance motion retrieval (S2D) based on the sentence-to-motion $\\mathrm{NPMI}(x_s, y)$.",
"bbox": [507, 519, 882, 598],
"page_idx": 6
},
{
"type": "text",
"text": "In this section, our dataset comprised 868 songs and was divided into 78,875 bars (85%) for training, 5,237 bars (5%) for development, and 8,611 bars (10%) for testing. Unlike Section 4 where models were trained on the full dataset, here we specifically trained the motion encoder and decoder, and conducted k-means clustering from scratch using only the training subset. In training the motion encoder, we utilized early stopping based on the development subset, ensuring precision in model tuning. We confined k-means clustering and NPMI calculations strictly to the training data, guaranteeing that these models were accurately calibrated for the specific tasks described in this section. Additionally, we adjusted the codebook sizes for k-means clustering, with optimal sizes of 6000 for lyrics and 7000 for motions in the W2D method, and 3500 for lyrics and 7000 for motions in the S2D method.",
"bbox": [507, 599, 884, 888],
"page_idx": 6
},
{
"type": "text",
"text": "As there are no existing methods specifically designed for lyrics-to-dance motion retrieval, to eval",
"bbox": [507, 889, 882, 920],
"page_idx": 6
},
{
"type": "page_number",
"text": "7907",
"bbox": [480, 927, 519, 940],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/955bc1ec148d5e7b357d1cac3a95dd56e1f165fed1a79902f6828ed2876f6ad0.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [193, 87, 868, 200],
"page_idx": 7
},
{
"type": "image",
"img_path": "images/e470a8ace07616356980655383f2fc33b3fee9d1b18c061d3d6450bdea979460.jpg",
"image_caption": ["Figure 3: Examples of lyric and dance motion relationships from bars with positive NPMI values. The two examples include one that shows word-level correlations and one that shows sentence-level correlations. Each example pairs a synchronized lyric translated into English (with the original Japanese in parentheses) with its corresponding dance motion. The term \"FUNC\" indicates Japanese functional words that defy easy translation. NPMI values are indicated by visual cues: negative values are indicated by a minus sign in an orange box, positive values by a plus sign in a blue box, and zero values by 0."],
"image_footnote": [],
"bbox": [117, 202, 868, 297],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/3d191205ccfa7a5dd915d1e507d6e7691609c4ffaf46a28b05bb89aa7f45cecc.jpg",
"table_caption": ["Table 2: Lyrics-to-dance motion retrieval comparison."],
"table_footnote": [],
"table_body": "<table><tr><td>Method</td><td>MRR↑</td><td>1/MRR↓</td></tr><tr><td>Random</td><td>0.00113</td><td>884</td></tr><tr><td>Contrastive learning</td><td>0.00151</td><td>663</td></tr><tr><td>Proposed method (W2D)</td><td>0.01905</td><td>53</td></tr><tr><td>Proposed method (S2D)</td><td>0.01837</td><td>54</td></tr></table>",
"bbox": [121, 441, 480, 513],
"page_idx": 7
},
{
"type": "text",
"text": "uate our lyrics-to-dance motion retrieval method we devised a baseline using contrastive learning techniques adapted from text-to-motion retrieval studies (Yu et al., 2024; Horie et al., 2023; Petrovich et al., 2023; Tevet et al., 2022). Additionally, we included a basic random selection method that randomly selects dance motions from the test set. For the contrastive learning method, sentence vectors from a bar, generated by pre-trained Sentence-BERT, were first averaged into a 384-dimensional vector and then compressed to a 256-dimensional vector using an MLP. Dance motion feature vectors from our pre-trained motion encoder were similarly processed into a sequence of 256-dimensional vectors by an MLP. Those vectors were then averaged into a single 256-dimensional vector via mean pooling. We fixed the parameters for Sentence-BERT and the motion encoder, focusing training on the MLP parameters using a contrastive loss function to distinguish between matching and non-matching pairs. Training employed the AdamW optimizer with a mini-batch size of 16 over 200 epochs, with early stopping triggered after 10 epochs without",
"bbox": [112, 551, 489, 921],
"page_idx": 7
},
{
"type": "text",
"text": "improvement in development loss.",
"bbox": [507, 420, 764, 435],
"page_idx": 7
},
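A minimal sketch of the contrastive baseline described above, using a symmetric InfoNCE loss over in-batch pairs; the paper does not specify the exact loss form, so this objective and the temperature value are assumptions.

```python
# Minimal sketch: symmetric InfoNCE over 256-d lyric and motion embeddings
# produced by the trainable MLP heads described above.
import torch
import torch.nn.functional as F

def info_nce(lyric_emb, motion_emb, temperature: float = 0.07):
    """lyric_emb, motion_emb: (B, 256) embeddings of matching pairs."""
    l = F.normalize(lyric_emb, dim=-1)
    m = F.normalize(motion_emb, dim=-1)
    logits = l @ m.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(l.size(0), device=l.device)  # diagonal = matches
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```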
{
"type": "text",
"text": "The experimental results presented in Table 2 show that while the contrastive learning method slightly outperforms the random method, both of our proposed methods achieve improvements, with Mean Reciprocal Ranks (MRRs) of 0.019 for W2D and 0.018 for S2D. This performance indicates that for a total of 8,611 bars, W2D ranks the correct dance motions within the top 53 positions on average, while S2D ranks them within the top 54 positions. A statistical t-test between W2D and S2D results yields a p-value of 0.480, indicating no significant difference between the two methods. These results support our hypothesis that while not all lyrics and dance motions share inherent relationships, there are meaningful relationships in certain instances.",
"bbox": [507, 437, 884, 694],
"page_idx": 7
},
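A minimal sketch of the MRR metric reported in Table 2: the mean of the reciprocal rank of the correct motion bar over all test queries.

```python
# Minimal sketch of Mean Reciprocal Rank (MRR).
def mean_reciprocal_rank(ranks: list[int]) -> float:
    """ranks: 1-based rank of the ground-truth bar for each query."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Example: an MRR near 0.019 corresponds to an average rank around 53.
print(mean_reciprocal_rank([53, 52, 54]))  # ~0.0188
```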
{
"type": "text",
"text": "6 Conclusion and Future Work",
"text_level": 1,
"bbox": [507, 713, 794, 728],
"page_idx": 7
},
{
"type": "text",
"text": "This paper introduces a novel method for quantifying the relationship between lyrics and dance motions that uses co-occurrence frequency. Our method effectively detects where lyrics correlate with dance motions and was validated by identifying meaningful relationships, such as the association between \"love\" and heart-shaped motions, and outperforming a previous method in a lyrics-to-dance motion retrieval task.",
"bbox": [507, 743, 884, 885],
"page_idx": 7
},
{
"type": "text",
"text": "The success of this method not only impacts fields such as lyrics information processing, dance",
"bbox": [507, 889, 882, 921],
"page_idx": 7
},
{
"type": "page_number",
"text": "7908",
"bbox": [480, 927, 519, 940],
"page_idx": 7
},
{
"type": "text",
"text": "information processing, music information retrieval, and computer vision but also opens promising avenues for interdisciplinary studies and enhances the integration of text content into choreographic design. Future work will aim to integrate our method into existing audio-to-dance motion retrieval methods to improve their accuracy in matching dance motions with lyrics.",
"bbox": [112, 84, 492, 212],
"page_idx": 8
},
{
"type": "text",
"text": "7 Limitations",
"text_level": 1,
"bbox": [112, 225, 250, 240],
"page_idx": 8
},
{
"type": "text",
"text": "First, the dataset we used to evaluate our method's performance contained predominantly Japanese songs and thus may not represent the global musical landscape. Although our approach is adaptable, its generalizability needs to be validated with different linguistic inputs in the future. The dance motions in our dataset were mainly in the style of Japanese popular music and lacked the diversity of dance styles such as breakdance or street dance. Nonetheless, our method can theoretically be applied to other styles.",
"bbox": [112, 250, 489, 426],
"page_idx": 8
},
{
"type": "text",
"text": "Second, while our method successfully identifies many significant lyric-dance associations, not all relationships are easily interpretable. This limitation highlights the challenges of using purely data-driven approaches without additional contextual or cultural insights. Additionally, due to computational constraints, we used the multilingual Sentence-BERT for lyric analysis, forgoing more advanced large-scale language models that might enhance our method's performance. Our study paved the way for future research using such computationally intensive models.",
"bbox": [112, 428, 489, 621],
"page_idx": 8
},
{
"type": "text",
"text": "8 Acknowledgments",
"text_level": 1,
"bbox": [112, 633, 307, 649],
"page_idx": 8
},
{
"type": "text",
"text": "This work was supported in part by JST CREST Grant Number JPMJCR20D4, Japan.",
"bbox": [112, 659, 487, 690],
"page_idx": 8
},
{
"type": "text",
"text": "References",
"text_level": 1,
"bbox": [115, 718, 213, 732],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Ho Yin Au, Jie Chen, Junkun Jiang, and Yike Guo. 2022. ChoreoGraph: Music-conditioned automatic dance choreography over a style and tempo consistent dynamic graph. In The 30th ACM International Conference on Multimedia, MM 2022, pages 3917-3925.",
"Donald J Berndt and James Clifford. 1994. Using dynamic time warping to find patterns in time series. In Proceedings of Workshop on Knowledge Discovery in Databases, pages 359-370.",
"Uttaran Bhattacharya, Trisha Mittal, Rohan Chandra, Tanmay Randhavane, Aniket Bera, and Dinesh"
],
"bbox": [115, 740, 489, 919],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Manocha. 2020a. STEP: Spatial temporal graph convolutional networks for emotion perception from gaits. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pages 1342-1350.",
"Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan, Aniket Bera, and Dinesh Manocha. 2021. Text2Gestures: A transformer-based network for generating emotive body gestures for virtual agents. In IEEE Virtual Reality and 3D User Interfaces, VR 2021, pages 160-169.",
"Uttaran Bhattacharya, Nicholas Rewkowski, Pooja Guhan, Niall L. Williams, Trisha Mittal, Aniket Bera, and Dinesh Manocha. 2020b. Generating emotive gaits for virtual agents using affect-based autoregression. In 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2020, pages 24-35.",
"Sebastian Böck, Florian Krebs, and Gerhard Widmer. 2016. Joint beat and downbeat tracking with recurrent neural networks. In Proceedings of the 17th International Society for Music Information Retrieval Conference, ISMIR 2016, pages 255-261.",
"Kang Chen, Zhipeng Tan, Jin Lei, Song-Hai Zhang, Yuan-Chen Guo, Weidong Zhang, and Shi-Min Hu. 2021. ChoreoMaster: Choreography-oriented music-driven dance synthesis. ACM Trans. Graph., 40(4):145:1-145:13.",
"Arthur Crenn, Rizwan Ahmed Khan, Alexandre Meyer, and Saïda Bouakaz. 2016. Body expression recognition from animated 3D skeleton. In International Conference on 3D Imaging, IC3D 2016, pages 1-7.",
"Rishabh Dabral, Muhammad Hamza Mughal, Vladislav Golyanik, and Christian Theobalt. 2023. MoFusion: A framework for denoising-diffusion-based motion synthesis. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, pages 9760-9770.",
"Jibin Gao, Junfu Pu, Honglun Zhang, Ying Shan, and Wei-Shi Zheng. 2022. PC-Dance: Posture-controllable music-driven dance synthesis. In The 30th ACM International Conference on Multimedia, MM 2022, pages 1261-1269.",
"Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. 2022. Transformer language models without positional encodings still learn positional information. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1382-1390.",
"Junpei Horie, Wataru Noguchi, Hiroyuki Iizuka, and Masahito Yamamoto. 2023. Learning shared embedding representation of motion and text using contrastive learning. Artif. Life Robotics, 28(1):148-157.",
"Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. 2023. MotionGPT: Human motion as a foreign language. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023."
],
"bbox": [510, 85, 884, 919],
"page_idx": 8
},
{
"type": "page_number",
"text": "7909",
"bbox": [480, 928, 519, 940],
"page_idx": 8
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Andrea Kleinsmith and Nadia Bianchi-Berthouze. 2013. Affective body expression perception and recognition: A survey. IEEE Trans. Affect. Comput., 4(1):15-33.",
"Ronghui Li, Yuxiang Zhang, Yachao Zhang, Hongwen Zhang, Jie Guo, Yan Zhang, Yebin Liu, and Xiu Li. 2024. Lodge: A coarse to fine diffusion network for long dance generation guided by the characteristic dance primitives. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, pages 1524-1534.",
"Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In Proceedings of the 7th International Conference on Learning Representations, ICLR 2019.",
"Zhenye Luo, Min Ren, Xuecai Hu, Yongzhen Huang, and Li Yao. 2024. POPDG: Popular 3D dance generation with PopDanceSet. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, pages 26984-26993.",
"Tomoyasu Nakano and Masataka Goto. 2016. LyricListPlayer: A consecutive-query-by-playback interface for retrieving similar word sequences from different song lyrics. In Proceedings of the 13th Sound and Music Computing Conference, SMC 2016, pages 344-349.",
"Mathis Petrovich, Michael J. Black, and Gül Varol. 2023. TMR: Text-to-motion retrieval using contrastive 3D human motion synthesis. In IEEE/CVF International Conference on Computer Vision, ICCV 2023, pages 9454-9463.",
"Qiaosong Qi, Le Zhuo, Aixi Zhang, Yue Liao, Fei Fang, Si Liu, and Shuicheng Yan. 2023. DiffDance: Cascaded human motion diffusion model for dance generation. In Proceedings of the 31st ACM International Conference on Multimedia, MM 2023, pages 1374-1382.",
"Haocong Rao and Chunyan Miao. 2023. TranSG: Transformer-based skeleton graph prototype contrastive learning with structure-trajectory prompted reconstruction for person re-identification. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, pages 22118-22128.",
"Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, pages 3980-3990.",
"Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, MICCAI 2015, volume 9351, pages 234-241."
],
"bbox": [115, 85, 489, 919],
"page_idx": 9
},
{
"type": "list",
"sub_type": "ref_text",
"list_items": [
"Guy Tevet, Brian Gordon, Amir Hertz, Amit H. Bermano, and Daniel Cohen-Or. 2022. MotionCLIP: Exposing human motion generation to CLIP space. In European Conference on Computer Vision, ECCV 2022, volume 13682 of Lecture Notes in Computer Science, pages 358-374.",
"Jonathan Tseng, Rodrigo Castellon, and C. Karen Liu. 2023. EDGE: Editable dance generation from music. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, pages 448-458.",
"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pages 5998-6008.",
"Qing Yu, Mikihiro Tanaka, and Kent Fujiwara. 2024. Exploring vision transformers for 3D human motion-language models with motion patches. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, pages 937-946.",
"Canyu Zhang, Youbao Tang, Ning Zhang, Ruei-Sung Lin, Mei Han, Jing Xiao, and Song Wang. 2024. Bidirectional autoregressive diffusion model for dance generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, pages 687-696.",
"Wentao Zhu, Xiaoxuan Ma, Dongwoo Ro, Hai Ci, Jinlu Zhang, Jiaxin Shi, Feng Gao, Qi Tian, and Yizhou Wang. 2024. Human motion generation: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 46(4):2430-2449."
],
"bbox": [510, 85, 882, 563],
"page_idx": 9
},
{
"type": "text",
"text": "A Implementation Details",
"text_level": 1,
"bbox": [510, 586, 749, 602],
"page_idx": 9
},
{
"type": "text",
"text": "The training of the dance motion encoder and decoder was implemented using PyTorch<sup>3</sup> and conducted on an NVIDIA V100 GPU. The total number of trainable parameters for the dance motion encoder-decoder is 5,858,221. For clustering, we employed the k-means algorithm implemented in scikit-learn<sup>4</sup>, and statistical significance testing was performed using the t-test function from scipy<sup>5</sup>. Due to computational resource constraints, all results presented in this paper are based on a single run of the experiments.",
"bbox": [509, 611, 882, 788],
"page_idx": 9
},
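A minimal sketch of the significance testing mentioned above via scipy's two-sample t-test; feeding per-query reciprocal ranks is our assumption about the test inputs.

```python
# Minimal sketch: independent two-sample t-test over per-query scores.
from scipy.stats import ttest_ind

def compare_methods(rr_a, rr_b):
    """rr_a, rr_b: per-query reciprocal ranks of two retrieval methods
    (assumed inputs). Returns the t statistic and p-value."""
    t_stat, p_value = ttest_ind(rr_a, rr_b)
    return t_stat, p_value
```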
{
"type": "text",
"text": "B Human Skeletal Model and Affective Features",
"text_level": 1,
"bbox": [510, 799, 863, 831],
"page_idx": 9
},
{
"type": "text",
"text": "The left side of Figure 4 illustrates the human skeletal model with 53 joints, while the right side",
"bbox": [510, 841, 880, 873],
"page_idx": 9
},
{
"type": "page_footnote",
"text": "<sup>3</sup>https://pytorch.org",
"bbox": [532, 881, 685, 892],
"page_idx": 9
},
{
"type": "page_footnote",
"text": "<sup>4</sup>https://scikit-learn.org/stable/",
"bbox": [532, 894, 781, 906],
"page_idx": 9
},
{
"type": "page_footnote",
"text": "<sup>5</sup>https://scipy.org",
"bbox": [532, 906, 668, 919],
"page_idx": 9
},
{
"type": "page_number",
"text": "7910",
"bbox": [480, 927, 519, 940],
"page_idx": 9
},
{
"type": "image",
"img_path": "images/96c6bc0fb38b28ab7a7ec0c8878549de6dcea61671f56280d3925bc99f2451ce.jpg",
"image_caption": ["Figure 4: Detailed specifications of human skeletal model and affective features."],
"image_footnote": [],
"bbox": [126, 80, 361, 281],
"page_idx": 10
},
{
"type": "image",
"img_path": "images/89fc0ce3f90c933aea8c9a77fc1551d67a1e96e3e25bec3422b1082b6efbf555.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [388, 80, 626, 183],
"page_idx": 10
},
{
"type": "image",
"img_path": "images/1adbe1701036bfc558f147e180c1aa9e91171102cfe22976179fb280b285e191.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [635, 80, 747, 183],
"page_idx": 10
},
{
"type": "image",
"img_path": "images/383b723037729bc1911d7bed7b358326d27ed2eab38f8672b774587cfc273385.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [763, 80, 863, 183],
"page_idx": 10
},
{
"type": "image",
"img_path": "images/50051902634c7f6782845680e472252de5e122c6a6fc4ed7584fa4d44a679aec.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [394, 184, 628, 280],
"page_idx": 10
},
{
"type": "image",
"img_path": "images/41b7b7b45086492c25ced8aa81860d71c2c23e5ae0fd78e8bddc7adb3ccf08ba.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [640, 184, 769, 279],
"page_idx": 10
},
{
"type": "image",
"img_path": "images/3ebf14d134544f458cc4e139370a733845a4a01d782dfcd864d02ef45e2a2620.jpg",
"image_caption": [],
"image_footnote": [],
"bbox": [764, 184, 870, 278],
"page_idx": 10
},
{
"type": "table",
"img_path": "images/c242b8288a76325733fe01fa29959cac5962bb3955a513946092203784bd30f9.jpg",
"table_caption": ["Table 3: Retrieval performance with skeletal and affective feature vectors. The codebook size in the table is displayed as a pair of values, with the first representing the codebook size for lyrics and the second representing the codebook size for motions."],
"table_footnote": [],
"table_body": "<table><tr><td>Retrieval strategy</td><td>Feature vector</td><td>Codebook size</td><td>MRR ↑</td><td>1/MRR ↓</td></tr><tr><td rowspan=\"3\">Word-to-Dance motion (W2D)</td><td>Both feature vectors</td><td>6000, 7000</td><td>0.01905</td><td>53</td></tr><tr><td>Skeletal feature vectors</td><td>6000, 5000</td><td>0.01641</td><td>61</td></tr><tr><td>Affective feature vectors</td><td>6000, 5000</td><td>0.01757</td><td>57</td></tr><tr><td rowspan=\"3\">Sentence-to-Dance motion (S2D)</td><td>Both feature vectors</td><td>3500, 7000</td><td>0.01837</td><td>54</td></tr><tr><td>Skeletal feature vectors</td><td>3000, 6500</td><td>0.01547</td><td>65</td></tr><tr><td>Affective feature vectors</td><td>5000, 4500</td><td>0.01475</td><td>68</td></tr></table>",
"bbox": [144, 375, 853, 494],
"page_idx": 10
},
{
"type": "text",
"text": "displays 40 affective features designed to express emotional states through body language. These features include volume $(V)$ calculated from the distances between the extremal joint coordinates, length $(L)$ measured across ten different joint pairs, area $(A)$ derived from five types of triangles formed by triplets of joints, body joint curvature $(C)$ calculated from 14 different measurements, and finger joint curvature $(F)$ represented by ten different calculations that together quantify expressive body dynamics.",
"bbox": [112, 517, 489, 694],
"page_idx": 10
},
{
"type": "text",
"text": "C Effectiveness of Combining Skeletal and Affective Feature Vectors",
"text_level": 1,
"bbox": [112, 712, 465, 745],
"page_idx": 10
},
{
"type": "text",
"text": "In this section, we investigate the contribution of skeletal feature vectors and affective feature vectors to the performance of our dance motion encoder through an ablation test.",
"bbox": [112, 758, 487, 822],
"page_idx": 10
},
{
"type": "text",
"text": "For this experiment, we retrained the dance motion encoder using only skeletal feature vectors and, separately, using only affective feature vectors. The training parameters and configurations were kept identical to those used in the proposed method. Additionally, the optimal codebook sizes",
"bbox": [112, 825, 489, 921],
"page_idx": 10
},
{
"type": "text",
"text": "were determined separately for the W2D and S2D methods.",
"bbox": [507, 517, 880, 546],
"page_idx": 10
},
{
"type": "text",
"text": "The results are shown in Table 3. The table shows that the model using both skeletal and affective feature vectors achieves better retrieval performance than models using either type of feature vector alone. This improvement is statistically significant, with p-values below 0.05 as determined by t-tests. These results suggest that the combination of skeletal and affective feature vectors effectively contributes to the lyrics-to-dance motion retrieval task.",
"bbox": [507, 551, 882, 709],
"page_idx": 10
},
{
"type": "text",
"text": "D Additional Examples of Lyric-Dance Relationships",
"text_level": 1,
"bbox": [507, 728, 865, 762],
"page_idx": 10
},
{
"type": "text",
"text": "Figures 5, 6, 7, and 8 present additional examples of the relationships between lyrics and dance motions. Figures 5 and 6 present the relationships between lyric words and dance motions, while Figures 7 and 8 present the relationships between entire lyric sentences and dance motions. Table 4 lists the song titles and lyricists presented in Figures 3, 5, 6, 7, and 8.",
"bbox": [507, 774, 884, 902],
"page_idx": 10
},
{
"type": "text",
"text": "In Figure 5, several relationships can be ob",
"bbox": [526, 904, 882, 921],
"page_idx": 10
},
{
"type": "page_number",
"text": "7911",
"bbox": [480, 927, 519, 940],
"page_idx": 10
},
{
"type": "image",
"img_path": "images/006af55eabb758dd43b6066782a8545470ad53ff0308cc64f93d345066f8e486.jpg",
"image_caption": ["Frame 1", "Frame T", "Figure 5: Examples of lyric and dance motion relationships with positive NPMI (Part 1/4)."],
"image_footnote": [],
"bbox": [124, 105, 863, 890],
"page_idx": 11
},
{
"type": "page_number",
"text": "7912",
"bbox": [480, 927, 519, 940],
"page_idx": 11
},
{
"type": "image",
"img_path": "images/13a4ecb5b3eb1dd396b344da2f7f0606a8eaaf00eb81b011bf6db166b37fa28a.jpg",
"image_caption": ["Frame 1", "Frame T", "Figure 6: Examples of lyric and dance motion relationships with positive NPMI (Part 2/4)."],
"image_footnote": [],
"bbox": [115, 104, 868, 875],
"page_idx": 12
},
{
"type": "page_number",
"text": "7913",
"bbox": [480, 928, 519, 940],
"page_idx": 12
},
{
"type": "image",
"img_path": "images/df547c1295c72a17fd34aafa1722dc9fe2ee8caf2e0037e0276d2df7f36437fb.jpg",
"image_caption": ["Frame 1", "Frame T", "Figure 7: Examples of lyric and dance motion relationships with positive NPMI (Part 3/4)."],
"image_footnote": [],
"bbox": [139, 102, 847, 890],
"page_idx": 13
},
{
"type": "page_number",
"text": "7914",
"bbox": [480, 928, 519, 940],
"page_idx": 13
},
{
"type": "image",
"img_path": "images/5817812f3ffe717e694c00e17e6b22adb91e8515d3dddcd3a07c7f065ae0cc9e.jpg",
"image_caption": ["Frame 1", "Frame T", "Figure 8: Examples of lyric and dance motion relationships with positive NPMI (Part 4/4)."],
"image_footnote": [],
"bbox": [137, 102, 848, 883],
"page_idx": 14
},
{
"type": "page_number",
"text": "7915",
"bbox": [480, 928, 519, 940],
"page_idx": 14
},
{
"type": "text",
"text": "served. Dance motions 3 and 4 correspond to numbers such as “2” and “3,” where dancers use their fingers to count. Dance motions 5 and 6 show that words related to photography, such as “photography” and “toy camera,” are associated with framing gestures using fingers. Dance motions 7 and 8 reveal that time-related words like “seconds” and “clock” correspond to motions where the arms mimic the hands of a clock. Dance motions 9 and 10 demonstrate that animal sounds such as “Wawooo!” and “Roar!” correspond to hand gestures resembling animal claws.",
|
| 1883 |
+
"bbox": [
|
| 1884 |
+
112,
|
| 1885 |
+
84,
|
| 1886 |
+
489,
|
| 1887 |
+
275
|
| 1888 |
+
],
|
| 1889 |
+
"page_idx": 15
|
| 1890 |
+
},
|
| 1891 |
+
{
|
| 1892 |
+
"type": "text",
|
| 1893 |
+
"text": "In Figure 6, when words such as \"heart\" or \"chest\" are sung, the corresponding dance motions 11 and 12 involve placing a hand on the chest. Dance motions 13 and 14 correspond to the word \"promise,\" where the dancer raises their pinky finger, a gesture that in some cultures symbolizes a promise. Dance motions 15 and 16 correspond to the word \"you,\" where the dancer extends an arm forward, pointing toward the audience. Dance motions 17 and 18 correspond to the word \"money,\" where the dancer's finger form a circular shape representing a coin. These examples demonstrate how specific words in lyrics influence corresponding dance motions.",
|
| 1894 |
+
"bbox": [
|
| 1895 |
+
110,
|
| 1896 |
+
278,
|
| 1897 |
+
489,
|
| 1898 |
+
500
|
| 1899 |
+
],
|
| 1900 |
+
"page_idx": 15
|
| 1901 |
+
},
|
| 1902 |
+
{
|
| 1903 |
+
"type": "text",
|
| 1904 |
+
"text": "Figure 7 illustrates examples of the relationship between lyric sentences and dance motions. Dance motions 19 and 20 correspond to sentences related to \"new world,\" where the dancer raises an index finger toward the sky, symbolizing the gesture of pointing to a new world. Dance motions 21 and 22 align with sentences expressing \"I meet you,\" where the dancer uses an index finger to point toward \"you.\" Dance motions 23 and 24 correspond to sentences about \"white snow,\" with the dancer making a motion that mimics catching falling snow with their hands. Dance motions 25 and 26 correspond to sentences about \"selfies,\" where the dancer makes a gesture of holding a camera while showing a peace sign, a common pose for taking photos in some cultures.",
|
| 1905 |
+
"bbox": [
|
| 1906 |
+
112,
|
| 1907 |
+
502,
|
| 1908 |
+
489,
|
| 1909 |
+
759
|
| 1910 |
+
],
|
| 1911 |
+
"page_idx": 15
|
| 1912 |
+
},
|
| 1913 |
+
{
|
| 1914 |
+
"type": "text",
|
| 1915 |
+
"text": "Figure 8 also presents examples of the relationship between lyric sentences and dance motions. Dance motions 27 and 28 correspond to negations using \"not,\" where the dancer crosses their arms to form an \"X,\" a gesture that signifies negation in some cultures. Dance motions 29 and 30 relate to sentences about losing something, with the dancer pointing to the ground with their index finger as if indicating a lost object. Dance motions 31 and 32 correspond to sentences containing the phrase \"I",
|
| 1916 |
+
"bbox": [
|
| 1917 |
+
112,
|
| 1918 |
+
760,
|
| 1919 |
+
489,
|
| 1920 |
+
921
|
| 1921 |
+
],
|
| 1922 |
+
"page_idx": 15
|
| 1923 |
+
},
|
| 1924 |
+
{
|
| 1925 |
+
"type": "table",
|
| 1926 |
+
"img_path": "images/e1446c34157c9cf3436ab0e1bedb96728f8bf4133c634946015c575468d0ccd8.jpg",
|
| 1927 |
+
"table_caption": [
|
| 1928 |
+
"Table 4: Song titles and lyricists referenced in Figures 3, 5, 6, 7, and 8."
|
| 1929 |
+
],
|
| 1930 |
+
"table_footnote": [],
|
| 1931 |
+
"table_body": "<table><tr><td>Dance \nmotion</td><td>Song title / Lyricist</td></tr><tr><td>1</td><td>エイリアムエイリアム/NaムタMAN星人</td></tr><tr><td>2</td><td>極楽净土/MARiA</td></tr><tr><td>3</td><td>聖槍爆裂ボーリ/るり・もじや</td></tr><tr><td>4</td><td>奇跡さえも/Omoi</td></tr><tr><td>5</td><td>シビュラ/wotaku</td></tr><tr><td>6</td><td>GIFT/花束P</td></tr><tr><td>7</td><td>45秒/れすばん</td></tr><tr><td>8</td><td>Love Timer/emon(Tes.)</td></tr><tr><td>9</td><td>ルマ/いいロー</td></tr><tr><td>10</td><td>よう,thisだリバリバーケへ/大石昌良</td></tr><tr><td>11</td><td>トモダチ以上的えと・せら/曲者P</td></tr><tr><td>12</td><td>フロリネ/かしお。</td></tr><tr><td>13</td><td>Who? / Azari</td></tr><tr><td>14</td><td>晴天を穿て/傘村ト一夕</td></tr><tr><td>15</td><td>ダダダダ天使/NaムタMAN星人</td></tr><tr><td>16</td><td>愛言葉Ⅲ/DECO*27</td></tr><tr><td>17</td><td>キラメギラリ/yura</td></tr><tr><td>18</td><td>妄想税/DECO*27</td></tr><tr><td>19</td><td>Brand New World/三日月美嘉</td></tr><tr><td>20</td><td>Melody Line/SmileR</td></tr><tr><td>21</td><td>リバーベス広ニバーベス/NaムタMAN星人</td></tr><tr><td>22</td><td>Stocking Filler/nuru</td></tr><tr><td>23</td><td>Stocking Filler/nuru</td></tr><tr><td>24</td><td>Snow Fairy Story/40mP</td></tr><tr><td>25</td><td>ダ一只手ダ一只手/NaムタMAN星人</td></tr><tr><td>26</td><td>少女溶解/砂粒</td></tr><tr><td>27</td><td>キヤット亚イメイク/奏音69</td></tr><tr><td>28</td><td>愛言葉IV/DECO*27</td></tr><tr><td>29</td><td>Fantastic Night/べ部分内容P</td></tr><tr><td>30</td><td>トリノコンテイ/40mP</td></tr><tr><td>31</td><td>ひんてていっくLuv/れをる</td></tr><tr><td>32</td><td>おじやま虫/DECO*27</td></tr><tr><td>33</td><td>金星のダUNS/NaムタMAN星人</td></tr><tr><td>34</td><td>チーニケーマケ利亚斯/TOKOTOKO</td></tr></table>",
|
| 1932 |
+
"bbox": [
|
| 1933 |
+
515,
|
| 1934 |
+
120,
|
| 1935 |
+
877,
|
| 1936 |
+
604
|
| 1937 |
+
],
|
| 1938 |
+
"page_idx": 15
|
| 1939 |
+
},
|
| 1940 |
+
{
|
| 1941 |
+
"type": "text",
|
| 1942 |
+
"text": "love,\" where the dancer touches both cheeks with their hands in a cute expression. Dance motions 33 and 34 align with sentences related to love, where the dancer forms a heart shape with their hands. These examples demonstrate that our proposed method can discover various lyric-dance motion relationships in a data-driven manner.",
|
| 1943 |
+
"bbox": [
|
| 1944 |
+
507,
|
| 1945 |
+
627,
|
| 1946 |
+
884,
|
| 1947 |
+
740
|
| 1948 |
+
],
|
| 1949 |
+
"page_idx": 15
|
| 1950 |
+
},
|
| 1951 |
+
{
|
| 1952 |
+
"type": "page_number",
|
| 1953 |
+
"text": "7916",
|
| 1954 |
+
"bbox": [
|
| 1955 |
+
480,
|
| 1956 |
+
928,
|
| 1957 |
+
521,
|
| 1958 |
+
940
|
| 1959 |
+
],
|
| 1960 |
+
"page_idx": 15
|
| 1961 |
+
}
|
| 1962 |
+
]
|
2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/1137dc0d-0b8d-4c1f-bfdb-069a25b57730_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/1137dc0d-0b8d-4c1f-bfdb-069a25b57730_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c9f28aee201621689e0cc53fc3a04f8d289f476fa677568fb98c53e635dcc390
|
| 3 |
+
size 6171682
|
2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/full.md
ADDED
|
@@ -0,0 +1,344 @@
|
| 1 |
+
# A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships
|
| 2 |
+
|
| 3 |
+
Kento Watanabe and Masataka Goto
|
| 4 |
+
|
| 5 |
+
National Institute of Advanced Industrial Science and Technology (AIST)
|
| 6 |
+
|
| 7 |
+
{kento.watanabe, m.goto}@aist.go.jp
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
Dancing to music with lyrics is a popular form of expression. While it is generally accepted that there are relationships between lyrics and dance motions, previous studies have not explored these relationships. A major challenge is that the relationships between lyrics and dance motions are not constant throughout a song but are instead localized to specific parts. To address this challenge, we hypothesize that lyrics and dance motions that co-occur across multiple songs are related. Based on this hypothesis, we propose a novel data-driven method to detect the parts of songs where meaningful relationships between lyrics and dance motions exist. We use clustering to transform lyrics and dance motions into symbols, enabling the calculation of co-occurrence frequencies and detection of significant correlations. We validated the effectiveness of our method on a dataset of time-synchronized lyrics and dance motions, finding high correlation values for emotionally salient lyrics such as "love", which is expressed in heart-shaped motions. Furthermore, using our relationship detection method, we propose a method for retrieving dance motions from lyrics that outperforms previous text-to-motion retrieval methods, which focus on prose and non-dance motions.
|
| 12 |
+
|
| 13 |
+
# 1 Introduction
|
| 14 |
+
|
| 15 |
+
Platforms like YouTube and TikTok have popularized dance videos, allowing amateur dancers to express their creativity, but choreographing these performances is challenging for those without professional training. Consequently, previous studies have focused on synthesizing or retrieving dance motions that match the rhythms and timbres of music (Zhu et al., 2024). However, choreographic design is influenced by both audio attributes and lyrics, as dancers often interpret lyrics literally, such as by making heart shapes to express "love".
|
| 16 |
+
|
| 17 |
+
Yet no empirical studies have quantitatively analyzed the relationships between lyrics and dance, and no studies have specifically focused on retrieving dance motions associated with lyrics. Quantitative analysis of lyrics and dance motions offers two main contributions. Academically, this analysis provides clear evidence of how lyrics and dance motions are related. Practically, it enables the development of systems that suggest dance motions based on user-input lyrics, offering choreographic ideas to support dancers, particularly amateurs, in creating or enhancing their choreography.
|
| 18 |
+
|
| 19 |
+
Previous studies have quantified the relationship between prose and corresponding non-dance gestures (e.g., "A man walks in a quarter circle to the left") (Yu et al., 2024; Horie et al., 2023; Petrovich et al., 2023; Tevet et al., 2022). These methods assume that text and motion are analyzed in units of sentences and sequences, respectively, and that all text-motion pairs are inherently related. However, these methods fall short for analyzing interactions between lyrics and dance motions, because those interactions are often localized to specific parts of songs rather than uniformly distributed across the entire song. For example, functional words like "the" or "is" may not correspond to any dance motion, while emotionally charged words like "love" might correlate with specific motions. To analyze lyrics and dance motions, we need a method that can detect which frames<sup>1</sup> have motions related to lyrics.
|
| 20 |
+
|
| 21 |
+
This study proposes a novel data-driven method to analyze and quantify the relationship between lyrics and dance motions as illustrated in Figure 1. We hypothesize that lyrics and dance motions that co-occur in different songs are related. For example, if "jump" is sung while a jumping motion is performed in multiple songs, the co-occurrence suggests a meaningful relationship.
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
Figure 1: Overview of the proposed data-driven method.
|
| 25 |
+
|
| 26 |
+
To quantify these relationships, we first transform the lyrics and dance motions of each frame into discrete symbols by using clustering methods. These symbols allow us to calculate their co-occurrence frequency. We then calculate Pointwise Mutual Information (PMI), a measure used in natural language processing (NLP), and use it to evaluate these frequencies. By using PMI we find positive correlations for frequent co-occurrences and negative correlations for independent occurrences, thus detecting specific frames where lyrics and dance motions are related.
|
| 27 |
+
|
| 28 |
+
To investigate the effectiveness of our method, we have prepared a novel dataset of time-synchronized lyrics and dance motions, including the dancer's finger motions. Applying our method to this dataset, we have detected frames with high PMI values between lyrics and dance motions. For example, the lyric "love" corresponds to dance motions that form heart shapes with the hands, and the phrase "getting lost" is associated with motions that suggest searching for a destination. Our proposed method therefore makes it possible to quantify and analyze specific relationships between lyrics and dance motions, revealing valuable correlations.
|
| 29 |
+
|
| 30 |
+
In addition, we have developed a lyrics-to-dance motion retrieval method by utilizing our method to detect relationships between lyrics and dance motions. This method outperforms an existing text-to-motion retrieval method based on contrastive learning, which assumes that all text-motion pairs are inherently related. Unlike this general method that focuses on text and non-dance motions, our method specifically targets lyrics and dance motions that frequently co-occur across multiple songs, indicating a meaningful relationship.
|
| 31 |
+
|
| 32 |
+
Our method overcomes the limitations observed in broader text-to-motion studies and improves retrieval performance, providing an effective solution in this area.
|
| 33 |
+
|
| 34 |
+
# 2 Related Work
|
| 35 |
+
|
| 36 |
+
# 2.1 Audio-to-Dance Motion Synthesis
|
| 37 |
+
|
| 38 |
+
Previous studies have focused on correlating dance motions with the rhythmic and timbral features of audio signals (Zhu et al., 2024). A notable trend in this area is the use of diffusion models to synthesize dance motions from music audio (Tseng et al., 2023; Dabral et al., 2023; Li et al., 2024; Zhang et al., 2024; Luo et al., 2024; Qi et al., 2023). These models represent dance motion as matrices defined by frame counts $T$ and joint parameters $J$ , and they use architectures such as U-Nets (Ronneberger et al., 2015) or Transformers (Vaswani et al., 2017) to reconstruct these matrices. While promising, synthesis based on diffusion models often results in long computation times and can produce motions plagued by unnatural artifacts such as jitter or sliding feet.
|
| 39 |
+
|
| 40 |
+
An alternative method involves motion graph-based synthesis (Chen et al., 2021; Au et al., 2022; Gao et al., 2022) structured into three phases: segmenting audio into musical bars, retrieving dance motions from a database for each segment, and ensuring natural transitions between sequences. This method, ideal for music with repetitive structures like verse-bridge-chorus, maintains consistency across similar musical sections.
|
| 41 |
+
|
| 42 |
+
Unlike these studies, we do not focus on synthesizing dance motions from audio signals.
|
| 43 |
+
|
| 44 |
+
Instead, we present a novel method that quantifies the relationship between lyrics and specific dance motions, offering a unique approach within the field of dance motion analysis. Our method enriches the understanding of how lyrics influence and correspond to dance, and it provides findings that could improve future methods of dance motion synthesis and retrieval by ensuring that dance motions are more closely aligned with lyrical content.
|
| 45 |
+
|
| 46 |
+
# 2.2 Text and Motion Relationships
|
| 47 |
+
|
| 48 |
+
The relationship between prose text descriptions and corresponding non-dance motions has attracted considerable interest. Recent advances have used contrastive learning to embed text and motion feature vectors in a shared vector space, effectively aligning related text and motion vectors to improve the accuracy of motion vector retrieval (Yu et al., 2024; Horie et al., 2023; Petrovich et al., 2023; Tevet et al., 2022). In addition, the integration of large language models has extended the capabilities of text-motion analysis (Jiang et al., 2023). A particularly innovative method involves the use of vector-quantized variational auto-encoders to transform motion data into codebooks, which are then treated as pseudowords. These pseudowords are integrated with text to train language models, supporting a range of applications from text-to-motion synthesis to motion description generation and predictive modeling.
|
| 49 |
+
|
| 50 |
+
However, many existing methods assume a uniform relationship across all text-motion pairs, an assumption that may not reflect the complexity of lyrics-dance interactions. Contrary to the commonly assumed uniformity, lyrics-dance relationships may appear selectively, becoming prominent only in particular contexts, such as when emotionally charged words like "love" inspire certain dance motions. Our study examines how lyrics influence dance in different situations, allowing us to detail the multifaceted nature of these interactions.
|
| 51 |
+
|
| 52 |
+
# 3 Time-Synchronized Lyrics and Dance Motion Pair Data
|
| 53 |
+
|
| 54 |
+
Our goal is to analyze and quantify the relationship between lyrics and dance motions. To achieve this, we need data where each frame of dance motion is associated with specific words or sentences from the corresponding lyrics.
|
| 55 |
+
|
| 56 |
+
# 3.1 Data Collection
|
| 57 |
+
|
| 58 |
+
We collected 1,000 dance motion datasets (totaling 55.3 hours) from the MikuMikuDance community.
|
| 59 |
+
|
| 60 |
+
In this community, creators manually trace dance motions from dance videos on platforms such as YouTube and NicoNico or use motion capture technologies to create dance sequences. Our collection includes 979 traced motions and 21 captured motions. These dance motions correspond to 868 unique songs, as some songs have multiple associated dance motions.
|
| 61 |
+
|
| 62 |
+
We obtained the corresponding audio and lyrics for these songs from various online resources. Most of the lyrics are in Japanese, with some in English. To ensure synchronization between the dance motions and the audio, we manually aligned their start times and annotated the start and end times of each lyric sentence during audio playback. Word-level timing annotation, which is labor-intensive, was refined using an automatic synchronization method (Nakano and Goto, 2016) that allows precise alignment of individual words. As a result, each frame of the motion data, recorded at 30 frames per second, is associated with specific words and sentences. Frames without corresponding lyrics were assigned a padding token, represented as [PAD], to maintain sequence consistency. As these choreographies and lyrics are copyrighted, we do not plan to make the collected data publicly available. However, the code for training the proposed model and conducting the evaluation is available at https://github.com/KentoW/lyrics-and-dance.
|
| 63 |
+
|
| 64 |
+
# 3.2 Dance Motion Data Pre-processing
|
| 65 |
+
|
| 66 |
+
The human skeletal model in our dataset comprises 53 joints, each represented by global coordinates (x, y, z) and Euler angles (roll, pitch, yaw). See Appendix B for details on the structure of the human skeletal model. To avoid issues like gimbal lock, we convert these angles into a six-dimensional format using sine and cosine transformations: sin(roll), cos(roll), sin(pitch), cos(pitch), sin(yaw), and cos(yaw). Preliminary tests showed this transformation method to be more effective than using quaternions.
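As a concrete illustration, the following NumPy snippet is a minimal sketch (our own, not the authors' code) of the sine/cosine transformation described above, assuming `euler` holds (roll, pitch, yaw) in radians per joint:

```python
# Minimal sketch of the six-dimensional angle transformation.
import numpy as np

def euler_to_6d(euler: np.ndarray) -> np.ndarray:
    # Interleave so each joint yields sin(roll), cos(roll),
    # sin(pitch), cos(pitch), sin(yaw), cos(yaw).
    return np.stack([np.sin(euler), np.cos(euler)], axis=-1).reshape(len(euler), -1)

frame = np.random.uniform(-np.pi, np.pi, size=(53, 3))  # 53 joints, one frame
print(euler_to_6d(frame).shape)  # (53, 6)
```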
|
| 67 |
+
|
| 68 |
+
In this study, we define the musical bar as the unit of analysis for dance motions. Using the downbeat tracking method (Böck et al., 2016), we segmented the dance motion and audio data into bars, discarding any bars shorter than one second. This resulted in a dataset of 119,691 bars: 92,723 with lyrics and 26,968 without.
|
| 69 |
+
|
| 70 |
+
To ensure uniform spatial positioning for consistent analysis, we adjusted the y axis to set the minimum coordinate of the "toe" joints to zero and adjusted the x and z axes to align the human model's average position with the origin for each bar.
|
| 71 |
+
|
| 72 |
+
We calculated velocities and accelerations for each joint based on its positional and angular data, including local xyz directional velocities and accelerations in six dimensions and point-to-point global velocities and accelerations in two dimensions. We also derived the first and second derivatives of the six-dimensional angular parameters, adding twelve more dimensions. As a result, each joint frame contained 29-dimensional parameters, providing a detailed framework for comprehensive motion analysis. Finally, we normalized the 29-dimensional parameters so that they ranged between $-1$ and $1$.
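A rough sketch of the derivative computation is given below; the central-difference scheme and the per-feature min-max normalization statistics are our assumptions, since the paper does not spell them out:

```python
# Sketch: append velocities and accelerations to a (T, F) feature track
# sampled at 30 fps, then rescale each feature channel to [-1, 1].
import numpy as np

def add_derivatives(track: np.ndarray, fps: float = 30.0) -> np.ndarray:
    vel = np.gradient(track, 1.0 / fps, axis=0)   # first derivative over time
    acc = np.gradient(vel, 1.0 / fps, axis=0)     # second derivative
    return np.concatenate([track, vel, acc], axis=-1)

def scale_to_unit_range(x: np.ndarray) -> np.ndarray:
    lo, hi = x.min(axis=0, keepdims=True), x.max(axis=0, keepdims=True)
    return 2.0 * (x - lo) / np.maximum(hi - lo, 1e-8) - 1.0

track = np.random.rand(120, 9)                    # e.g., 120 frames, 9 features
feats = scale_to_unit_range(add_derivatives(track))
print(feats.shape)                                # (120, 27)
```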
|
| 73 |
+
|
| 74 |
+
# 4 Analyzing Lyrics and Dance Motion Relationships
|
| 75 |
+
|
| 76 |
+
In this section we present a method for detecting frames that demonstrate meaningful relationships between lyrics and dance motions within our time-synchronized data. Based on our hypothesis that the frequent co-occurrence of lyrics and dance motions across multiple songs indicates meaningful relationships, we use co-occurrence frequencies to quantitatively analyze these interactions.
|
| 77 |
+
|
| 78 |
+
To analyze the co-occurrence relationships at each frame, we utilize PMI, a metric used in NLP. PMI is calculated with the formula
|
| 79 |
+
|
| 80 |
+
$$
|
| 81 |
+
\operatorname{PMI}(x, y) = \log \frac{P(x, y)}{P(x)\,P(y)} \tag{1}
|
| 82 |
+
$$
|
| 83 |
+
|
| 84 |
+
where $x$ and $y$ respectively represent the lyrics and dance motions at a specific frame. $P(x, y)$ is the probability of their co-occurrence within the same frame, and $P(x)$ and $P(y)$ are the probabilities of observing $x$ and $y$ independently at any frame. High PMI values at a frame indicate a strong association between the lyrics and dance motions at that particular moment.
|
| 85 |
+
|
| 86 |
+
To apply PMI, we convert lyrics and dance motions into discrete symbols $x$ and $y$ for each frame. This involves transforming feature vectors, derived from deep learning models, into a form suitable for PMI analysis. Using a clustering method, we categorize similar motions and lyrical expressions into codebooks $x$ and $y$ , which serve as the basis for calculating PMI, enabling us to detect frames where lyrics and dance motions are closely related.
|
| 87 |
+
|
| 88 |
+
The process begins with extracting feature vectors for lyrics and dance motions by using deep learning models.
|
| 89 |
+
|
| 90 |
+
We then implement clustering to group similar lyrics and dance motions, facilitating the PMI calculations for our analysis.
|
| 91 |
+
|
| 92 |
+
# 4.1 Lyrics Feature Extraction
|
| 93 |
+
|
| 94 |
+
Before clustering lyrics, we calculate feature vectors using a language model, either at the word level or the sentence level. This distinction is crucial, as the relationship between lyrics and dance motions can vary. For example, "jump" may correspond to a specific motion, while "I feel free like a bird in the sky" can inspire broader, fluid motions, capturing the overall feeling.
|
| 95 |
+
|
| 96 |
+
To explore these relationships, we use a pre-trained multilingual Sentence-BERT model $^2$ (Reimers and Gurevych, 2019) to generate both word-level and sentence-level feature vectors. This method allows us to analyze how individual words and broader thematic content influence corresponding dance motions.
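Feature extraction of this kind can be sketched with the sentence-transformers library as follows; the checkpoint name is an illustrative assumption, not necessarily the model the authors used:

```python
# Sketch: word-level and sentence-level lyric embeddings from a
# multilingual Sentence-BERT model (hypothetical checkpoint choice).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
word_vecs = model.encode(["jump", "love"])                        # per-word vectors
sent_vecs = model.encode(["I feel free like a bird in the sky"])  # per-sentence
print(word_vecs.shape, sent_vecs.shape)  # e.g., (2, 384) (1, 384)
```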
|
| 97 |
+
|
| 98 |
+
# 4.2 Dance Motion Feature Extraction
|
| 99 |
+
|
| 100 |
+
We developed a novel dance motion encoder to extract features from our dataset, which includes detailed elements like finger joint configurations and variable sequence lengths. This encoder pre-processes both the physical and expressive components of dance, providing features for clustering.
|
| 101 |
+
|
| 102 |
+
The encoder processes two data types per frame: motion sequences and affective features. Each frame's motion data, denoted $S_{t}$ , is a matrix with dimensions $J \times P$ , where $J$ represents the number of joints and $P$ represents the number of parameters per joint. These matrices, $S_{1}, \ldots, S_{t}, \ldots, S_{T}$ , cover $T$ frames within a musical bar, capturing detailed motion across the sequence.
|
| 103 |
+
|
| 104 |
+
Additionally, we include affective features for each frame, denoted as $\pmb{a}_t$ . These vectors, derived from geometric properties like volume, area, length, and curvature between joints, represent the expressive qualities of dance. The sequence of these features across a bar is denoted as $\pmb{a}_1, \dots, \pmb{a}_t, \dots, \pmb{a}_T$ . Affective features recognized in human motion analysis (Kleinsmith and Bianchi-Berthouze, 2013; Crenn et al., 2016; Bhattacharya et al., 2020a,b, 2021) help capture the expressive aspects of dance, enriching our encoder's data input. By combining motion matrices and affective features, our encoder processes inputs that reflect both the physical execution and expressive dynamics of dance.
|
| 105 |
+
|
| 106 |
+

|
| 107 |
+
Figure 2: Overview of the proposed dance motion encoder and auto-encoder.
|
| 108 |
+
|
| 109 |
+
# 4.2.1 Affective Feature
|
| 110 |
+
|
| 111 |
+
We use 40 affective features, including the area of triangles formed by major joints, volume, distances between key joints, and limb curvature. These features provide a multidimensional view of the dancer's expressive state, revealing various emotional states. For example, a smaller area may suggest a reserved or tense posture, while a larger area implies openness and relaxation. See Appendix B for more details about the 40 affective features.
|
| 112 |
+
|
| 113 |
+
For each frame of motion data, we calculate these features along with their velocity and acceleration, forming a 120-dimensional affective feature vector. This method captures both the static posture and the dynamics of movement, which are crucial for understanding the fluidity and intensity of dance motions. We apply min-max normalization to these vectors, ensuring all features range from 0 to 1. This standardization allows for consistent and meaningful comparisons across different dance sequences.
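To make this concrete, here is a sketch of one representative affective feature; the specific joints chosen are illustrative assumptions, not the paper's exact definitions:

```python
# Sketch: the area of a triangle spanned by three joints, one of the
# geometric quantities behind the 40 affective features.
import numpy as np

def triangle_area(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    return 0.5 * float(np.linalg.norm(np.cross(b - a, c - a)))

joints = {  # toy 3-D joint positions
    "head": np.array([0.0, 1.7, 0.0]),
    "left_hand": np.array([-0.6, 1.2, 0.1]),
    "right_hand": np.array([0.6, 1.2, 0.1]),
}
# A small area may suggest a closed posture; a large one, openness.
print(triangle_area(joints["left_hand"], joints["right_hand"], joints["head"]))
```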
|
| 114 |
+
|
| 115 |
+
# 4.2.2 Dance Motion Encoder Architecture
|
| 116 |
+
|
| 117 |
+
Our dance motion encoder consists of a skeletal encoder and a temporal encoder, as shown on the left side of Figure 2. The process starts with skeletal data for each frame $S_{t}$.
|
| 118 |
+
|
| 119 |
+
Each $P$-dimensional joint vector in $S_{t}$ is transformed by a joint-specific Multi-Layer Perceptron (MLP). These transformed vectors are fed into a Graph Transformer, which uses Laplacian positional embeddings to maintain the relative positions of the joints (Rao and Miao, 2023). The output, a $J \times D$ representation over the joints, is flattened and compressed through a linear layer to produce a single $D$-dimensional vector representing the skeletal features of each frame, denoted as $\boldsymbol{u}_t$. This vector captures the comprehensive skeletal structure of the dance motions at each frame.
|
| 120 |
+
|
| 121 |
+
Simultaneously, affective features $(\pmb{a}_t)$ are calculated for each frame, capturing emotional dynamics through metrics like joint areas and volumes. Before inputting into the temporal encoder, skeletal feature vectors $(\pmb{u}_t)$ and affective features $(\pmb{a}_t)$ are concatenated, forming a combined $D + 120$ dimensional vector for each frame. This vector is then compressed through an MLP back to a uniform $D$-dimensional representation for each frame. The compressed vectors are fed into a Transformer encoder without positional embeddings (Haviv et al., 2022), emphasizing intrinsic interactions across frames rather than chronological order. This method generates a sequence of motion feature vectors $\left(\left[\pmb{v}_{1},\dots,\pmb{v}_{t},\dots,\pmb{v}_{T}\right]\right)$, which comprehensively represent both the skeletal structure and emotional dynamics of the dance.
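The temporal half of the encoder can be sketched in PyTorch as follows. Dimensions follow the text ($D = 256$, 120-dim affective features, 8 heads, 4 layers); the two-layer MLP and the omission of the skeletal/graph encoder producing $\boldsymbol{u}_t$ are simplifying assumptions:

```python
# Sketch of the temporal encoder: concatenate per-frame skeletal and
# affective features, compress with an MLP, and apply a Transformer
# encoder with no positional embeddings.
import torch
import torch.nn as nn

class TemporalEncoder(nn.Module):
    def __init__(self, d: int = 256, affect_dim: int = 120):
        super().__init__()
        self.compress = nn.Sequential(
            nn.Linear(d + affect_dim, d), nn.ReLU(), nn.Linear(d, d)
        )
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, u: torch.Tensor, a: torch.Tensor) -> torch.Tensor:
        # u: (B, T, D) skeletal features; a: (B, T, 120) affective features.
        x = self.compress(torch.cat([u, a], dim=-1))
        return self.encoder(x)  # (B, T, D) motion feature vectors v_1..v_T

enc = TemporalEncoder()
v = enc(torch.randn(2, 64, 256), torch.randn(2, 64, 120))
print(v.shape)  # torch.Size([2, 64, 256])
```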
|
| 122 |
+
|
| 123 |
+
To train our dance motion encoder, we use an auto-encoder method suitable for our dataset that lacks gold labels. The dance motion decoder, shown on the right side of Figure 2, is designed as a reverse architecture of the encoder. Both the encoder and the decoder are trained by minimizing the Mean Squared Error (MSE) between the inputs and reconstructed outputs. The training process reconstructs skeletal matrices $\hat{S}_t$ with tanh activation and affective features $\hat{\boldsymbol{a}}_t$ with sigmoid activation, ensuring the output matches the input:
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
\mathrm{Loss} = \sum_{t = 1}^{T} \Big( \operatorname{MSE}\big(S_{t}, \tanh(\hat{S}_{t})\big) + \operatorname{MSE}\big(\boldsymbol{a}_{t}, \operatorname{sigmoid}(\hat{\boldsymbol{a}}_{t})\big) \Big) \tag{2}
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
We train our auto-encoder on the full dataset of 119,691 bars using the AdamW optimizer (Loshchilov and Hutter, 2019), with a mini-batch size of 8 over 200 epochs. Our skeletal encoder processes data from 53 joints ( $J = 53$ ) and 29 motion parameters per joint ( $P = 29$ ). The encoder and decoder, with a dimensionality of $D = 256$ , use 4 multi-heads and 2 layers for skeletal processing, and 8 multi-heads and 4 layers for temporal processing.
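The objective in Equation (2) can be sketched as below; we assume the decoder emits pre-activation outputs, and use the mean rather than the per-frame sum, which only rescales the gradient:

```python
# Sketch of the reconstruction loss: MSE on tanh-activated skeletal
# outputs plus MSE on sigmoid-activated affective outputs.
import torch
import torch.nn.functional as F

def reconstruction_loss(s, s_hat, a, a_hat):
    # s, s_hat: (B, T, J, P) skeletal matrices in [-1, 1];
    # a, a_hat: (B, T, 120) affective features in [0, 1].
    return F.mse_loss(torch.tanh(s_hat), s) + F.mse_loss(torch.sigmoid(a_hat), a)
```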
|
| 130 |
+
|
| 131 |
+
# 4.3 Lyrics and Dance Motion Clustering
|
| 132 |
+
|
| 133 |
+
To confirm that lyrics and dance motions that co-occur in multiple songs indicate a meaningful relationship, it is crucial to preprocess the data before applying k-means clustering. This is because repetitive elements within a single song could skew our analysis. For example, if a word like "flower" is associated with a particular dance motion multiple times within only a single song, their association does not necessarily indicate a meaningful relationship. If the word and motion co-occur in multiple songs, however, their co-occurrence is likely to be meaningful.
|
| 134 |
+
|
| 135 |
+
To avoid clustering bias caused by duplicate entries within our dataset, we preprocess both lyrics and dance motions before clustering. Specifically, we identify and unify duplicate sentences within the lyrics to ensure that each unique sentence is represented only once in the clustering process. This prevents the formation of clusters dominated by repeated sentences. Similarly, for dance motions, we treat vectors with a cosine similarity of 0.99 or higher as duplicates and unify them. These preprocessing steps enable more accurate clustering by ensuring both lyrics and dance motions are represented by distinct, non-redundant data points.
|
| 136 |
+
|
| 137 |
+
With the data cleaned, we transform word vectors into the codebook $x_{w}$ and sentence vectors into the codebook $x_{s}$ for each frame. Similarly, dance motion vectors for each frame are transformed into the codebook $y$.
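One simple way to realize this de-duplication and clustering step is sketched below; the greedy similarity pass and the toy data sizes are our assumptions:

```python
# Sketch: unify near-duplicate motion vectors (cosine similarity >= 0.99),
# then fit k-means on the unified set and assign a codebook symbol per frame.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def deduplicate(vectors: np.ndarray, threshold: float = 0.99) -> np.ndarray:
    keep: list[np.ndarray] = []
    for v in vectors:  # greedy O(n^2) pass, adequate for a sketch
        if not keep or cosine_similarity([v], keep).max() < threshold:
            keep.append(v)
    return np.asarray(keep)

motion_vectors = np.random.rand(5000, 256)         # toy stand-in features
codebook = KMeans(n_clusters=500, random_state=0)  # the paper sweeps 500-7000
codebook.fit(deduplicate(motion_vectors))
y = codebook.predict(motion_vectors)               # symbol y for each frame
```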
|
| 138 |
+
|
| 139 |
+
# 4.4 PMI Calculation
|
| 140 |
+
|
| 141 |
+
To calculate PMI, we adjust the standard approach to fit our hypothesis that lyrics and dance motions that co-occur in multiple songs are indeed related. We calculate the probabilities based on the number of songs in which lyrics and dance motions co-occur by using the following formulas:
|
| 142 |
+
|
| 143 |
+
$$
|
| 144 |
+
P(x, y) = \frac{\#(x, y)}{\text{total number of songs}}, \tag{3}
|
| 145 |
+
$$
|
| 146 |
+
|
| 147 |
+
$$
|
| 148 |
+
P(x) = \frac{\#(x)}{\text{total number of songs}}, \tag{4}
|
| 149 |
+
$$
|
| 150 |
+
|
| 151 |
+
$$
|
| 152 |
+
P(y) = \frac{\#(y)}{\text{total number of songs}}. \tag{5}
|
| 153 |
+
$$
|
| 154 |
+
|
| 155 |
+
Here $\# (x,y)$ indicates the number of songs where $x$ and $y$ co-occur within the same frame, while $\# (x)$ and $\# (y)$ indicate the numbers of songs where $x$ and $y$ appear independently. Equations (3), (4) and (5) are then used to calculate PMI as defined in Equation (1), emphasizing the importance of frequent co-occurrences in different songs. Additionally, to avoid bias from single occurrences $(\# (x,y) = 1)$ , which can misleadingly suggest strong relationships, we set PMI for these cases to zero. For clearer interpretation, we use Normalized PMI (NPMI) by normalizing PMI values to a scale between $-1$ and $1$ .
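The song-level NPMI computation can be sketched as follows, including the rule that single co-occurrences are zeroed out; representing each song as sets of symbols and symbol pairs is our assumption:

```python
# Sketch: NPMI over song-level counts, per Equations (1) and (3)-(5).
import numpy as np

def npmi(songs: list[dict], x: int, y: int) -> float:
    """Each song is {"lyrics": set of x symbols, "motions": set of y symbols,
    "pairs": set of (x, y) pairs co-occurring in some frame}."""
    n = len(songs)
    n_xy = sum((x, y) in s["pairs"] for s in songs)
    if n_xy <= 1:                      # ignore single co-occurrences
        return 0.0
    n_x = sum(x in s["lyrics"] for s in songs)
    n_y = sum(y in s["motions"] for s in songs)
    p_xy = n_xy / n
    pmi = np.log(p_xy / ((n_x / n) * (n_y / n)))
    return float(pmi / -np.log(p_xy))  # normalize to [-1, 1]
```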
|
| 156 |
+
|
| 157 |
+
# 4.5 Analysis Setup
|
| 158 |
+
|
| 159 |
+
To comprehensively examine specific correlations between lyrics and dance motions across our entire dataset, we utilize high NPMI values in all lyric-motion pairs without dividing the dataset into training and test sets. We analyze these pairs using two types of lyric codebooks, $x_w$ for word-level and $x_s$ for sentence-level lyrics, alongside one codebook, $y$ , for dance motions. To obtain each codebook, we applied k-means clustering to each set of lyric or motion vectors across varying codebook sizes from 500 to 7000, in increments of 500, to explore different levels of granularity and find the most effective categorization for capturing nuanced relationships.
|
| 160 |
+
|
| 161 |
+
For each codebook size combination, we calculated NPMI values between all cluster pairs, one from the motion codebook and one from the lyrics codebook.
|
| 162 |
+
|
| 163 |
+
Table 1: Numbers and percentages of songs, bars, and frames with positive NPMI values.
|
| 164 |
+
|
| 165 |
+
<table><tr><td>Entity</td><td>In the case of NPMI(xw,y)</td><td>In the case of NPMI(xs,y)</td></tr><tr><td rowspan="2">Number of songs</td><td>781 songs</td><td>980 songs</td></tr><tr><td>(78.10%)</td><td>(98.00%)</td></tr><tr><td rowspan="2">Number of bars</td><td>2,671 bars</td><td>16,161 bars</td></tr><tr><td>(2.88%)</td><td>(17.43%)</td></tr><tr><td rowspan="2">Number of frames</td><td>14,431 frames</td><td>91,593 frames</td></tr><tr><td>(0.03%)</td><td>(1.99%)</td></tr></table>
|
| 166 |
+
|
| 167 |
+
This helped us detect the optimal granularity that maximizes the meaningful mutual information. Specifically, our highest NPMI values were 0.82 for word-level analysis (with a codebook size of 5000 for lyrics and 4500 for motions) and 0.93 for sentence-level analysis (with a codebook size of 6500 for both lyrics and motions). We use these codebook sizes for the following analysis.
|
| 168 |
+
|
| 169 |
+
# 4.6 Analysis Results
|
| 170 |
+
|
| 171 |
+
Table 1 shows that while the majority of songs (78.1% for word-level analysis and 98.0% for sentence-level analysis) have positive NPMI values, indicating strong lyrics-motion relationships, the numbers of bars and frames with positive NPMI values are quite small. This suggests that meaningful interactions between lyrics and dance motions are localized to specific parts of songs, supporting our hypothesis that meaningful relationships, while present, are not uniformly distributed across songs.
|
| 172 |
+
|
| 173 |
+
The examples shown in Figure 3 were selected from musical bars with positive NPMI values to illustrate specific relationships as we interpreted them. For example, emotional expressions in lyrics, such as "love", are often translated into heart-shaped gestures in dance. Sentence-level correlations show clear patterns, such as "getting lost" with a peering motion, demonstrating how broader narrative elements within lyrics can influence dance motions. See Appendix D for other examples of interpretable relationships between lyrics and corresponding dance motions. This analysis confirms that our data-driven method can uncover intuitive relationships between lyrics and dance motions.
|
| 174 |
+
|
| 175 |
+
# 5 Lyrics-to-Dance Motion Retrieval
|
| 176 |
+
|
| 177 |
+
We developed a method that allows input of a single musical bar of lyrics to retrieve the corresponding bar of dance motions. The input is a sequence of lyric words within a musical bar.
|
| 178 |
+
|
| 179 |
+
The retrieved output is a ranked list of musical bars containing the corresponding motions.
|
| 180 |
+
|
| 181 |
+
For the retrieval task, we use Dynamic Time Warping (DTW) (Berndt and Clifford, 1994) to measure the similarity between the input lyrics and available dance motions. DTW is ideal for handling time series data of varying lengths and for capturing partial frame similarities, thereby accounting for how lyrics and dance motions relate within a bar.
|
| 182 |
+
|
| 183 |
+
In our DTW implementation, we derive the cost matrix from the Normalized Pointwise Mutual Information (NPMI) between the lyric and dance motion codebooks. The substitution (match) cost between elements $x$ and $y$ is defined as
|
| 184 |
+
|
| 185 |
+
$$
|
| 186 |
+
\operatorname{cost}(x, y) = 1 - \big(1 + \operatorname{NPMI}(x, y)\big) / 2. \tag{6}
|
| 187 |
+
$$
|
| 188 |
+
|
| 189 |
+
This formula assigns lower costs when there is a stronger relationship between $x$ and $y$ (i.e., higher NPMI values) and assigns higher costs when the relationship is weaker. For insertions and deletions, we assign a fixed cost of 1. This means that when aligning sequences, inserting or deleting an element incurs a constant penalty, regardless of the specific elements involved. This use of a fixed cost was based on preliminary experiments showing that using variable costs degraded performance in the retrieval task.
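A direct implementation of this alignment can be sketched as below: substitution costs come from Equation (6) and insertions/deletions cost a fixed 1; the convention of treating unseen pairs as NPMI = 0 is our assumption:

```python
# Sketch: DTW-style alignment cost between a lyric symbol sequence and a
# motion symbol sequence, used to rank candidate bars (lower is better).
import numpy as np

def align_cost(lyrics: list, motions: list, npmi: dict) -> float:
    n, m = len(lyrics), len(motions)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    D[1:, 0] = np.arange(1, n + 1)   # deletions: fixed cost 1 each
    D[0, 1:] = np.arange(1, m + 1)   # insertions: fixed cost 1 each
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            v = npmi.get((lyrics[i - 1], motions[j - 1]), 0.0)
            sub = 1.0 - (1.0 + v) / 2.0           # Equation (6)
            D[i, j] = min(D[i - 1, j - 1] + sub,  # substitution / match
                          D[i - 1, j] + 1.0,      # deletion
                          D[i, j - 1] + 1.0)      # insertion
    return float(D[n, m])
```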
|
| 190 |
+
|
| 191 |
+
We implemented two retrieval strategies: word-to-dance motion retrieval (W2D) based on the word-to-motion $\mathrm{NPMI}(x_w, y)$ and sentence-to-dance motion retrieval (S2D) based on the sentence-to-motion $\mathrm{NPMI}(x_s, y)$ .
|
| 192 |
+
|
| 193 |
+
In this section, our dataset comprised 868 songs and was divided into 78,875 bars (85%) for training, 5,237 bars (5%) for development, and 8,611 bars (10%) for testing. Unlike Section 4 where models were trained on the full dataset, here we specifically trained the motion encoder and decoder, and conducted k-means clustering from scratch using only the training subset. In training the motion encoder, we utilized early stopping based on the development subset, ensuring precision in model tuning. We confined k-means clustering and NPMI calculations strictly to the training data, guaranteeing that these models were accurately calibrated for the specific tasks described in this section. Additionally, we adjusted the codebook sizes for k-means clustering, with optimal sizes of 6000 for lyrics and 7000 for motions in the W2D method, and 3500 for lyrics and 7000 for motions in the S2D method.
|
| 194 |
+
|
| 195 |
+
No existing methods are specifically designed for lyrics-to-dance motion retrieval.
|
| 196 |
+
|
| 197 |
+

|
| 198 |
+
|
| 199 |
+

|
| 200 |
+
Figure 3: Examples of lyric and dance motion relationships from bars with positive NPMI values. The two examples include one that shows word-level correlations and one that shows sentence-level correlations. Each example pairs a synchronized lyric translated into English (with the original Japanese in parentheses) with its corresponding dance motion. The term "FUNC" indicates Japanese functional words that defy easy translation. NPMI values are indicated by visual cues: negative values are indicated by a minus sign in an orange box, positive values by a plus sign in a blue box, and zero values by 0.
|
| 201 |
+
|
| 202 |
+
Table 2: Lyrics-to-dance motion retrieval comparison.
|
| 203 |
+
|
| 204 |
+
<table><tr><td>Method</td><td>MRR↑</td><td>1/MRR↓</td></tr><tr><td>Random</td><td>0.00113</td><td>884</td></tr><tr><td>Contrastive learning</td><td>0.00151</td><td>663</td></tr><tr><td>Proposed method (W2D)</td><td>0.01905</td><td>53</td></tr><tr><td>Proposed method (S2D)</td><td>0.01837</td><td>54</td></tr></table>
|
| 205 |
+
|
| 206 |
+
To evaluate our lyrics-to-dance motion retrieval method, we therefore devised a baseline using contrastive learning techniques adapted from text-to-motion retrieval studies (Yu et al., 2024; Horie et al., 2023; Petrovich et al., 2023; Tevet et al., 2022). Additionally, we included a basic random selection method that randomly selects dance motions from the test set. For the contrastive learning method, sentence vectors from a bar, generated by pre-trained Sentence-BERT, were first averaged into a 384-dimensional vector and then compressed to a 256-dimensional vector using an MLP. Dance motion feature vectors from our pre-trained motion encoder were similarly processed into a sequence of 256-dimensional vectors by an MLP and then averaged into a single 256-dimensional vector via mean pooling. We fixed the parameters of Sentence-BERT and the motion encoder, focusing training on the MLP parameters with a contrastive loss function that distinguishes matching from non-matching pairs. Training employed the AdamW optimizer with a mini-batch size of 16 over 200 epochs.
|
| 207 |
+
|
| 208 |
+
Early stopping was triggered after 10 epochs without improvement in development loss.
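For reference, the trainable part of this baseline can be sketched as follows; the single-layer heads and the symmetric InfoNCE-style formulation are assumptions, since the paper does not specify the exact contrastive loss:

```python
# Sketch: MLP heads over frozen text/motion features, trained with a
# symmetric cross-entropy contrastive loss over in-batch pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

text_head = nn.Linear(384, 256)    # compress averaged sentence vectors
motion_head = nn.Linear(256, 256)  # compress averaged motion vectors

def contrastive_loss(text_vecs, motion_vecs, tau: float = 0.07):
    t = F.normalize(text_head(text_vecs), dim=-1)
    m = F.normalize(motion_head(motion_vecs), dim=-1)
    logits = t @ m.T / tau                 # (B, B) similarities
    target = torch.arange(len(t))          # matching pairs on the diagonal
    return (F.cross_entropy(logits, target)
            + F.cross_entropy(logits.T, target)) / 2
```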
|
| 209 |
+
|
| 210 |
+
The experimental results presented in Table 2 show that while the contrastive learning method slightly outperforms the random method, both of our proposed methods achieve substantial improvements, with Mean Reciprocal Ranks (MRRs) of 0.019 for W2D and 0.018 for S2D. This performance indicates that for a total of 8,611 bars, W2D ranks the correct dance motions within the top 53 positions on average, while S2D ranks them within the top 54 positions. A statistical t-test between W2D and S2D results yields a p-value of 0.480, indicating no significant difference between the two methods. These results support our hypothesis that while not all lyrics and dance motions share inherent relationships, there are meaningful relationships in certain instances.
|
| 211 |
+
|
| 212 |
+
# 6 Conclusion and Future Work
|
| 213 |
+
|
| 214 |
+
This paper introduces a novel method for quantifying the relationship between lyrics and dance motions that uses co-occurrence frequency. Our method effectively detects where lyrics correlate with dance motions and was validated both by identifying meaningful relationships, such as the association between "love" and heart-shaped motions, and by outperforming a previous method in a lyrics-to-dance motion retrieval task.
|
| 215 |
+
|
| 216 |
+
The success of this method not only impacts fields such as lyrics information processing,
|
| 217 |
+
|
| 218 |
+
dance information processing, music information retrieval, and computer vision but also opens promising avenues for interdisciplinary studies and enhances the integration of text content into choreographic design. Future work will aim to integrate our method into existing audio-to-dance motion retrieval methods to improve their accuracy in matching dance motions with lyrics.
|
| 219 |
+
|
| 220 |
+
# 7 Limitations
|
| 221 |
+
|
| 222 |
+
First, the dataset we used to evaluate our method's performance contained predominantly Japanese songs and thus may not represent the global musical landscape. Although our approach is adaptable, its generalizability needs to be validated with different linguistic inputs in the future. The dance motions in our dataset were mainly in the style of Japanese popular music and lacked the diversity of dance styles such as breakdance or street dance. Nonetheless, our method can theoretically be applied to other styles.
|
| 223 |
+
|
| 224 |
+
Second, while our method successfully identifies many significant lyric-dance associations, not all relationships are easily interpretable. This limitation highlights the challenges of using purely data-driven approaches without additional contextual or cultural insights. Additionally, due to computational constraints, we used the multilingual Sentence-BERT for lyric analysis, forgoing more advanced large-scale language models that might enhance our method's performance. Our study paved the way for future research using such computationally intensive models.
|
| 225 |
+
|
| 226 |
+
# 8 Acknowledgments
|
| 227 |
+
|
| 228 |
+
This work was supported in part by JST CREST Grant Number JPMJCR20D4, Japan.
|
| 229 |
+
|
| 230 |
+
# References
|
| 231 |
+
|
| 232 |
+
Ho Yin Au, Jie Chen, Junkun Jiang, and Yike Guo. 2022. ChoreoGraph: Music-conditioned automatic dance choreography over a style and tempo consistent dynamic graph. In The 30th ACM International Conference on Multimedia, MM 2022, pages 3917-3925.
|
| 233 |
+
Donald J Berndt and James Clifford. 1994. Using dynamic time warping to find patterns in time series. In Proceedings of Workshop on Knowledge Discovery in Databases, pages 359-370.
|
| 234 |
+
Uttaran Bhattacharya, Trisha Mittal, Rohan Chandra, Tanmay Randhavane, Aniket Bera, and Dinesh
|
| 235 |
+
|
| 236 |
+
Manocha. 2020a. STEP: Spatial temporal graph convolutional networks for emotion perception from gaits. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pages 1342-1350.
|
| 237 |
+
Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan, Aniket Bera, and Dinesh Manocha. 2021. Text2Gestures: A transformer-based network for generating emotive body gestures for virtual agents. In IEEE Virtual Reality and 3D User Interfaces, VR 2021, pages 160-169.
|
| 238 |
+
Uttaran Bhattacharya, Nicholas Rewkowski, Pooja Guhan, Niall L. Williams, Trisha Mittal, Aniket Bera, and Dinesh Manocha. 2020b. Generating emotive gaits for virtual agents using affect-based autoregression. In 2020 IEEE International Symposium on Mixed and Augmented Reality, ISMAR 2020, pages 24-35.
|
| 239 |
+
Sebastian Böck, Florian Krebs, and Gerhard Widmer. 2016. Joint beat and downbeat tracking with recurrent neural networks. In Proceedings of the 17th International Society for Music Information Retrieval Conference, ISMIR 2016, pages 255-261.
|
| 240 |
+
Kang Chen, Zhipeng Tan, Jin Lei, Song-Hai Zhang, Yuan-Chen Guo, Weidong Zhang, and Shi-Min Hu. 2021. ChoreoMaster: Choreography-oriented music-driven dance synthesis. ACM Trans. Graph., 40(4):145:1-145:13.
|
| 241 |
+
Arthur Crenn, Rizwan Ahmed Khan, Alexandre Meyer, and Säïda Bouakaz. 2016. Body expression recognition from animated 3D skeleton. In International Conference on 3D Imaging, IC3D 2016, pages 1-7.
|
| 242 |
+
Rishabh Dabral, Muhammad Hamza Mughal, Vladislav Golyanik, and Christian Theobalt. 2023. MoFusion: A framework for denoising-diffusion-based motion synthesis. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, pages 9760-9770.
|
| 243 |
+
Jibin Gao, Junfu Pu, Honglun Zhang, Ying Shan, and Wei-Shi Zheng. 2022. PC-Dance: Posture-controllable music-driven dance synthesis. In The 30th ACM International Conference on Multimedia, MM 2022, pages 1261-1269.
|
| 244 |
+
Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. 2022. Transformer language models without positional encodings still learn positional information. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1382-1390.
|
| 245 |
+
Junpei Horie, Wataru Noguchi, Hiroyuki Iizuka, and Masahito Yamamoto. 2023. Learning shared embedding representation of motion and text using contrastive learning. Artif. Life Robotics, 28(1):148-157.
|
| 246 |
+
Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, and Tao Chen. 2023. MotionGPT: Human motion as a foreign language. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023.
|
| 247 |
+
|
| 248 |
+
Andrea Kleinsmith and Nadia Bianchi-Berthouze. 2013. Affective body expression perception and recognition: A survey. IEEE Trans. Affect. Comput., 4(1):15-33.
|
| 249 |
+
Ronghui Li, Yuxiang Zhang, Yachao Zhang, Hongwen Zhang, Jie Guo, Yan Zhang, Yebin Liu, and Xiu Li. 2024. Lodge: A coarse to fine diffusion network for long dance generation guided by the characteristic dance primitives. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, pages 1524-1534.
|
| 250 |
+
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In Proceedings of the 7th International Conference on Learning Representations, ICLR 2019.
|
| 251 |
+
Zhenye Luo, Min Ren, Xuecai Hu, Yongzhen Huang, and Li Yao. 2024. POPDG: Popular 3D dance generation with PopDanceSet. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, pages 26984-26993.
|
| 252 |
+
Tomoyasu Nakano and Masataka Goto. 2016. LyricListPlayer: A consecutive-query-by-playback interface for retrieving similar word sequences from different song lyrics. In Proceedings of the 13th Sound and Music Computing Conference, SMC 2016, pages 344-349.
|
| 253 |
+
Mathis Petrovich, Michael J. Black, and Gül Varol. 2023. TMR: Text-to-motion retrieval using contrastive 3D human motion synthesis. In IEEE/CVF International Conference on Computer Vision, ICCV 2023, pages 9454-9463.
|
| 254 |
+
Qiaosong Qi, Le Zhuo, Aixi Zhang, Yue Liao, Fei Fang, Si Liu, and Shuicheng Yan. 2023. DiffDance: Cascaded human motion diffusion model for dance generation. In Proceedings of the 31st ACM International Conference on Multimedia, MM 2023, pages 1374-1382.
|
| 255 |
+
Haocong Rao and Chunyan Miao. 2023. TranSG: Transformer-based skeleton graph prototype contrastive learning with structure-trajectory prompted reconstruction for person re-identification. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, pages 22118-22128.
|
| 256 |
+
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, pages 3980-3990.
|
| 257 |
+
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, MICCAI 2015, volume 9351, pages 234-241.
|
| 258 |
+
|
| 259 |
+
Guy Tevet, Brian Gordon, Amir Hertz, Amit H. Bermano, and Daniel Cohen-Or. 2022. MotionCLIP: Exposing human motion generation to CLIP space. In European Conference on Computer Vision, ECCV 2022, volume 13682 of Lecture Notes in Computer Science, pages 358-374.
|
| 260 |
+
Jonathan Tseng, Rodrigo Castellon, and C. Karen Liu. 2023. EDGE: Editable dance generation from music. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023, pages 448-458.
|
| 261 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, pages 5998-6008.
|
| 262 |
+
Qing Yu, Mikihiro Tanaka, and Kent Fujiwara. 2024. Exploring vision transformers for 3D human motion-language models with motion patches. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, pages 937-946.
|
| 263 |
+
Canyu Zhang, Youbao Tang, Ning Zhang, Ruei-Sung Lin, Mei Han, Jing Xiao, and Song Wang. 2024. Bidirectional autoregressive diffusion model for dance generation. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024, pages 687-696.
|
| 264 |
+
Wentao Zhu, Xiaoxuan Ma, Dongwoo Ro, Hai Ci, Jinlu Zhang, Jiaxin Shi, Feng Gao, Qi Tian, and Yizhou Wang. 2024. Human motion generation: A survey. IEEE Trans. Pattern Anal. Mach. Intell., 46(4):2430-2449.
|
| 265 |
+
|
| 266 |
+
# A Implementation Details
|
| 267 |
+
|
| 268 |
+
The training of the dance motion encoder and decoder was implemented using PyTorch<sup>3</sup> and conducted on an NVIDIA V100 GPU. The total number of trainable parameters for the dance motion encoder-decoder is 5,858,221. For clustering, we employed the k-means algorithm implemented in scikit-learn<sup>4</sup>, and statistical significance testing was performed using the t-test function from scipy<sup>5</sup>. Due to computational resource constraints, all results presented in this paper are based on a single run of the experiments.
# B Human Skeletal Model and Affective Features
The left side of Figure 4 illustrates the human skeletal model with 53 joints, while the right side displays 40 affective features designed to express emotional states through body language. These features include volume $(V)$ calculated from the distances between the extremal joint coordinates, length $(L)$ measured across ten different joint pairs, area $(A)$ derived from five types of triangles formed by triplets of joints, body joint curvature $(C)$ calculated from 14 different measurements, and finger joint curvature $(F)$ represented by ten different calculations that together quantify expressive body dynamics.
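To make the feature definitions concrete, the sketch below computes one representative member of the volume $(V)$, length $(L)$, and area $(A)$ families from a single frame of joint coordinates; the joint indices are hypothetical placeholders, not the paper's actual joint pairs and triplets.

```python
import numpy as np

# One frame of the 53-joint skeleton: a (53, 3) array of xyz coordinates.
joints = np.random.default_rng(0).normal(size=(53, 3))

# Volume V: axis-aligned bounding box spanned by the extremal joints.
extent = joints.max(axis=0) - joints.min(axis=0)
volume = float(np.prod(extent))

# Length L: Euclidean distance between a (hypothetical) joint pair.
i, j = 20, 41  # placeholder indices
length = float(np.linalg.norm(joints[i] - joints[j]))

# Area A: area of the triangle spanned by a (hypothetical) joint triplet.
a, b, c = joints[0], joints[20], joints[41]
area = 0.5 * float(np.linalg.norm(np.cross(b - a, c - a)))

print(volume, length, area)
```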

Figure 4: Detailed specifications of human skeletal model and affective features.
Table 3: Retrieval performance with skeletal and affective feature vectors. Codebook sizes are shown as a pair of values, with the first value being the lyrics codebook size and the second the motion codebook size.
<table><tr><td>Retrieval strategy</td><td>Feature vector</td><td>Codebook size</td><td>MRR ↑</td><td>1/MRR ↓</td></tr><tr><td rowspan="3">Word-to-Dance motion (W2D)</td><td>Both feature vectors</td><td>6000, 7000</td><td>0.01905</td><td>53</td></tr><tr><td>Skeletal feature vectors</td><td>6000, 5000</td><td>0.01641</td><td>61</td></tr><tr><td>Affective feature vectors</td><td>6000, 5000</td><td>0.01757</td><td>57</td></tr><tr><td rowspan="3">Sentence-to-Dance motion (S2D)</td><td>Both feature vectors</td><td>3500, 7000</td><td>0.01837</td><td>54</td></tr><tr><td>Skeletal feature vectors</td><td>3000, 6500</td><td>0.01547</td><td>65</td></tr><tr><td>Affective feature vectors</td><td>5000, 4500</td><td>0.01475</td><td>68</td></tr></table>
# C Effectiveness of Combining Skeletal and Affective Feature Vectors
In this section, we investigate the contribution of skeletal feature vectors and affective feature vectors to the performance of our dance motion encoder through an ablation test.
For this experiment, we retrained the dance motion encoder using only skeletal feature vectors and, separately, using only affective feature vectors. The training parameters and configurations were kept identical to those used in the proposed method. Additionally, the optimal codebook sizes were determined separately for the W2D and S2D methods.
The results are shown in Table 3. The table shows that the model using both skeletal and affective feature vectors achieves better retrieval performance than models using either type of feature vector alone. This improvement is statistically significant, with p-values below 0.05 as determined by t-tests. These results suggest that the combination of skeletal and affective feature vectors effectively contributes to the lyrics-to-dance motion retrieval task.
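For reference, the MRR values in Table 3 follow the standard definition: the mean over queries of the reciprocal rank of the correct item, with the 1/MRR column simply reporting its inverse (lower is better). A minimal sketch:

```python
import numpy as np

def mean_reciprocal_rank(ranks):
    """ranks[i] is the 1-based rank of the correct dance motion for query i."""
    ranks = np.asarray(ranks, dtype=float)
    return float(np.mean(1.0 / ranks))

# Example: three queries whose correct motions were ranked 2nd, 50th, and 10th.
mrr = mean_reciprocal_rank([2, 50, 10])
print(mrr, 1.0 / mrr)  # ~0.207 and ~4.84
```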
# D Additional Examples of Lyric-Dance Relationships
Figures 5, 6, 7, and 8 present additional examples of the relationships between lyrics and dance motions. Figures 5 and 6 present the relationships between lyric words and dance motions, while Figures 7 and 8 present the relationships between entire lyric sentences and dance motions. Table 4 lists the song titles and lyricists presented in Figures 3, 5, 6, 7, and 8.

Figure 5: Examples of lyric and dance motion relationships with positive NPMI (Part 1/4).

Figure 6: Examples of lyric and dance motion relationships with positive NPMI (Part 2/4).

Figure 7: Examples of lyric and dance motion relationships with positive NPMI (Part 3/4).

Figure 8: Examples of lyric and dance motion relationships with positive NPMI (Part 4/4).
In Figure 5, several relationships can be observed. Dance motions 3 and 4 correspond to numbers such as “2” and “3,” where dancers use their fingers to count. Dance motions 5 and 6 show that words related to photography, such as “photography” and “toy camera,” are associated with framing gestures using fingers. Dance motions 7 and 8 reveal that time-related words like “seconds” and “clock” correspond to motions where the arms mimic the hands of a clock. Dance motions 9 and 10 demonstrate that animal sounds such as “Wawooo!” and “Roar!” correspond to hand gestures resembling animal claws.
In Figure 6, when words such as "heart" or "chest" are sung, the corresponding dance motions 11 and 12 involve placing a hand on the chest. Dance motions 13 and 14 correspond to the word "promise," where the dancer raises their pinky finger, a gesture that in some cultures symbolizes a promise. Dance motions 15 and 16 correspond to the word "you," where the dancer extends an arm forward, pointing toward the audience. Dance motions 17 and 18 correspond to the word "money," where the dancer's fingers form a circular shape representing a coin. These examples demonstrate how specific words in lyrics influence corresponding dance motions.
Figure 7 illustrates examples of the relationship between lyric sentences and dance motions. Dance motions 19 and 20 correspond to sentences related to "new world," where the dancer raises an index finger toward the sky, symbolizing the gesture of pointing to a new world. Dance motions 21 and 22 align with sentences expressing "I meet you," where the dancer uses an index finger to point toward "you." Dance motions 23 and 24 correspond to sentences about "white snow," with the dancer making a motion that mimics catching falling snow with their hands. Dance motions 25 and 26 correspond to sentences about "selfies," where the dancer makes a gesture of holding a camera while showing a peace sign, a common pose for taking photos in some cultures.
Figure 8 also presents examples of the relationship between lyric sentences and dance motions. Dance motions 27 and 28 correspond to negations using "not," where the dancer crosses their arms to form an "X," a gesture that signifies negation in some cultures. Dance motions 29 and 30 relate to sentences about losing something, with the dancer pointing to the ground with their index finger as if indicating a lost object. Dance motions 31 and 32 correspond to sentences containing the phrase "I love," where the dancer touches both cheeks with their hands in a cute expression. Dance motions 33 and 34 align with sentences related to love, where the dancer forms a heart shape with their hands. These examples demonstrate that our proposed method can discover various lyric-dance motion relationships in a data-driven manner.
Table 4: Song titles and lyricists referenced in Figures 3, 5, 6, 7, and 8.
<table><tr><td>Dance motion</td><td>Song title / Lyricist</td></tr><tr><td>1</td><td>エイリアンエイリアン/ナユタン星人</td></tr><tr><td>2</td><td>極楽浄土/MARiA</td></tr><tr><td>3</td><td>聖槍爆裂ボーイ/れるりり・もじゃ</td></tr><tr><td>4</td><td>奇跡さえも/Omoi</td></tr><tr><td>5</td><td>シビュラ/wotaku</td></tr><tr><td>6</td><td>GIFT/花束P</td></tr><tr><td>7</td><td>45秒/れすばん</td></tr><tr><td>8</td><td>Love Timer/emon(Tes.)</td></tr><tr><td>9</td><td>ルマ/いいロー</td></tr><tr><td>10</td><td>ようこそジャパリパークへ/大石昌良</td></tr><tr><td>11</td><td>トモダチ以上的えと・せら/曲者P</td></tr><tr><td>12</td><td>フロリネ/かしお。</td></tr><tr><td>13</td><td>Who? / Azari</td></tr><tr><td>14</td><td>晴天を穿て/傘村トータ</td></tr><tr><td>15</td><td>ダダダダ天使/ナユタン星人</td></tr><tr><td>16</td><td>愛言葉Ⅲ/DECO*27</td></tr><tr><td>17</td><td>キラメギラリ/yura</td></tr><tr><td>18</td><td>妄想税/DECO*27</td></tr><tr><td>19</td><td>Brand New World/三日月美嘉</td></tr><tr><td>20</td><td>Melody Line/SmileR</td></tr><tr><td>21</td><td>リバーベス広ニバーベス/ナユタン星人</td></tr><tr><td>22</td><td>Stocking Filler/nuru</td></tr><tr><td>23</td><td>Stocking Filler/nuru</td></tr><tr><td>24</td><td>Snow Fairy Story/40mP</td></tr><tr><td>25</td><td>ダンスロボットダンス/ナユタン星人</td></tr><tr><td>26</td><td>少女溶解/砂粒</td></tr><tr><td>27</td><td>キャットアイメイク/奏音69</td></tr><tr><td>28</td><td>愛言葉Ⅳ/DECO*27</td></tr><tr><td>29</td><td>Fantastic Night/べ部分内容P</td></tr><tr><td>30</td><td>トリノコシティ/40mP</td></tr><tr><td>31</td><td>ひんてていっくLuv/れをる</td></tr><tr><td>32</td><td>おじゃま虫/DECO*27</td></tr><tr><td>33</td><td>金星のダンス/ナユタン星人</td></tr><tr><td>34</td><td>チーニケーマケ利亚斯/TOKOTOKO</td></tr></table>
2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/images.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03458ac38a3dee3848d06eaad0ef551dc1f5617f02b86f26c9226271b382b328
+size 1574856
2025/A Data-Driven Method for Analyzing and Quantifying Lyrics-Dance Motion Relationships/layout.json
ADDED
The diff for this file is too large to render. See raw diff.
2025/A Distributional Perspective on Word Learning in Neural Language Models/d2c23648-7b19-4afb-a9b9-168fd986a2f1_content_list.json
ADDED
The diff for this file is too large to render. See raw diff.
2025/A Distributional Perspective on Word Learning in Neural Language Models/d2c23648-7b19-4afb-a9b9-168fd986a2f1_model.json
ADDED
The diff for this file is too large to render. See raw diff.
2025/A Distributional Perspective on Word Learning in Neural Language Models/d2c23648-7b19-4afb-a9b9-168fd986a2f1_origin.pdf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0f3f5d4606ef9c3f140e08fb285a85cd2b85c313ef35b845c7aa6cd2f7d4847
+size 10465099
2025/A Distributional Perspective on Word Learning in Neural Language Models/full.md
ADDED
@@ -0,0 +1,622 @@
# A Distributional Perspective on Word Learning in Neural Language Models
Filippo Ficarra$^{1}$, Ryan Cotterell$^{1}$, Alex Warstadt$^{1,2}$
$^{1}$ETH Zürich  $^{2}$University of California San Diego
{fficarra,rcotterell}@ethz.ch awarstadt@ucsd.edu
# Abstract
Language models (LMs) are increasingly being studied as models of human language learners. Due to the nascency of the field, it is not well-established whether LMs exhibit similar learning dynamics to humans, and there are few direct comparisons between learning trajectories in humans and models. Word learning trajectories for children are relatively well-documented, and recent work has tried to extend these investigations to language models. However, there are no widely agreed-upon metrics for word learning in language models. We take a distributional approach to this problem, defining lexical knowledge in terms of properties of the learned distribution for a target word. We argue that distributional signatures studied in prior work fail to capture key distributional information. Thus, we propose an array of signatures that improve on earlier approaches by capturing knowledge of both where the target word can and cannot occur as well as gradient preferences about the word's appropriateness. We obtain learning trajectories for a selection of small language models we train from scratch, study the relationship between different distributional signatures, compare how well they align with human word learning trajectories and interpretable lexical features, and address basic methodological questions about estimating these distributional signatures. Our metrics largely capture complementary information, suggesting that it is important not to rely on a single metric. However, across all metrics, language models' learning trajectories fail to correlate with those of children.
Code: FilippoFicarra/word_learning
# 1 Introduction
There is a long tradition of characterizing words in terms of their statistical properties (Wittgenstein, 1953). The distributional hypothesis (Harris, 1954; Lenci, 2008), which characterizes knowledge of a word in terms of "the company it keeps" (Firth, 1957), has proven surprisingly prescient. Indeed, such a characterization is the idea behind static word representations (Deerwester et al., 1990; Landauer and Dumais, 1997; Hofmann, 1999; Mikolov et al., 2013; Pennington et al., 2014) estimated from data, as well as modern (large) language models (OpenAI, 2022; Meta, 2024). While such a distributional approach to training language models (LMs) is now well-established, only recently has distributional information been explored as a tool for evaluating lexical knowledge in LMs.
Over the last few years, there has been a growing interest in studying word learning in language models (Nikolaus and Fourtassi, 2021a; Chang and Bergen, 2022; Portelance et al., 2024, 2023; Vong et al., 2024; Zhuang et al., 2024b,a; Ma et al., 2024). Most of these studies are part of a larger research program to use LMs to inform the study of human language acquisition by serving as convenient, controllable, and effective models of human development (Dupoux, 2018; Linzen, 2019; Warstadt and Bowman, 2022; Constantinescu et al., 2024). From this perspective, it is desirable to have LMs with human-like learning trajectories, as they can better serve as generalizable models of human learners. Word learning has a potentially important role in the success of this research program because it is one of the best proving grounds for comparing the learning trajectories of humans and LMs head to head. While some studies (e.g., Choshen et al., 2022) have tracked syntax learning in LMs using benchmarks like BLiMP (Warstadt et al., 2020), corresponding data for children is more limited in scope (Evanson et al., 2023). There is also child data on phonological learning (Lavechin et al., 2022) which can be explored further as audio-based LMs improve.
Fortuitously, word learning trajectories in text-based LMs can be easily compared against a wealth of child data in multiple languages thanks to the massive efforts of caregivers and scholars who report and curate child word learning data in the Wordbank database (Frank et al., 2017). Unfortunately, the caregiver reporting approach (Fenson et al., 2013) used in Wordbank is not immediately applicable to LMs, and there is no consensus on how to benchmark word learning in LMs. Zhuang et al. (2024b) explored word learning through different methods, including comparing LMs' word similarity scores to humans' (Finkelstein et al., 2001; Bruni et al., 2012; Hill et al., 2015; Gerz et al., 2016), classifying lexical entailment relations (Santus et al., 2016), predicting semantic features (Buchanan et al., 2019), and using minimal pairs to measure LM preferences for appropriate word usage (Marvin and Linzen, 2018). Other articles rely on visual stimuli to ground evaluations for multimodal models (Nikolaus and Fourtassi, 2021a; Berger et al., 2022; Vong et al., 2024). Notably, Chang and Bergen (2022) and Portelance et al. (2023) take a distributional approach, characterizing lexical knowledge in terms of the LM's surprisal, an information-theoretic quantity that has been widely studied in psycholinguistics (Hale, 2001; Levy, 2008).
In this study, we take inspiration from Chang and Bergen's (2022) approach to tracking the model's distributional knowledge about a particular word throughout training. We formalize their approach and improve on it in several respects. Chang and Bergen only consider the surprisal under an LM of a word in a context where the target word is appropriate, and (implicitly) rely on a trivial approximation of the ground truth distribution in evaluating the quality of lexical knowledge. In contrast, we propose a family of distributional signatures that allow for the consideration of the LM's learned distribution in both appropriate and inappropriate contexts. We also introduce distributional signatures that are truly intrinsic to the model itself, as well as reference signatures that compare the learned distribution to a non-trivial ground truth, which we approximate using a large pretrained LM.
In our experiments, we train language models from scratch on three datasets resembling the input to children to varying degrees. We record the distributional signatures for a set of common words throughout training, and following Chang and Bergen, we apply a threshold to the measured learning trajectories to obtain an age-of-acquisition $(AoA)$ for each word. We then conduct analyses to answer the following questions:
1. Which methods allow us to reliably extract AoA scores?
2. How does the order of word acquisition in LMs compare to that of children?
3. What are the empirical properties of the learning trajectories for different distributional signatures?
We find that the learning trajectories for different distributional signatures are indeed different from each other, suggesting that earlier approaches failed to capture some aspects of word learning. While many signatures, like Chang and Bergen's (2022), give trajectories that are highly correlated with simple features like lexical frequency, other signatures are harder to predict and therefore may capture more nontrivial information. However, we find that learning trajectories for some distributional signatures fail to converge, making AoAs difficult to infer. Finally, no signature yields AoA scores that are strongly correlated with children's AoA, supporting the conclusion that with current methods, LMs' learning patterns are poorly aligned with humans' and underscoring a limitation of current LMs as models of human development. We, therefore, call for future work to evaluate and improve the human-likeness of LMs' learning trajectories using the distributional signatures we propose.
# 2 Preliminaries
Let $\Sigma$ be an alphabet, a finite, non-empty set of characters, e.g., Unicode symbols. A string is a finite sequence of characters drawn from an alphabet $\Sigma$ . The set $\Sigma^{*}$ , the Kleene closure of $\Sigma$ , is the set of all strings with characters drawn from $\Sigma$ —including the empty string $\varepsilon$ . We consider two distinguished types of strings. First, we define a word as a character string $w \in \Sigma^{*}$ , which is believed to operate as a lexical item. Second, we refer to an arbitrary character string that precedes a word as a context. We denote a context as $c \in \Sigma^{*}$ .
A language model $p$ is a probability distribution over $\Sigma^*$ . A language model's prefix probability is defined as the following sum
$$
\overrightarrow{p}(\boldsymbol{y}) \stackrel{\text{def}}{=} \sum_{\boldsymbol{y}' \in \Sigma^*} p(\boldsymbol{y}\boldsymbol{y}'). \tag{1}
$$
Throughout the paper, we will primarily be interested in a specific ratio of $p$ 's prefix probabilities, which we use to define the probability of a word in a context<sup>3</sup> as follows
$$
\overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c}) \stackrel{\text{def}}{=} \frac{\overrightarrow{p}(\boldsymbol{c}\boldsymbol{w})}{\overrightarrow{p}(\boldsymbol{c})}. \tag{2}
$$
We are also interested in the surprisal of a word in a context, denoted $-\log \overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c})$ .
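To make Eq. (2) concrete for subword LMs: when $\boldsymbol{w}$ tokenizes into several subword tokens, the ratio of prefix probabilities telescopes into a product of next-token probabilities. Below is a minimal sketch of estimating $-\log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c})$ with an off-the-shelf autoregressive model; the model choice (gpt2) and the example strings are illustrative, not the models trained in this paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def word_surprisal(context: str, word: str) -> float:
    """-log q(w | c) in nats, summed over the word's subword tokens."""
    ctx_ids = tokenizer.encode(context)
    word_ids = tokenizer.encode(word)
    input_ids = torch.tensor([ctx_ids + word_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
    # The logits at position t predict the token at position t + 1.
    nll = 0.0
    for i, tok in enumerate(word_ids):
        nll -= log_probs[0, len(ctx_ids) + i - 1, tok].item()
    return nll

print(word_surprisal("The cat sat on the", " mat"))
```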
Now, we derive a language model $p$ 's context distribution using Bayes' rule as follows
$$
\overrightarrow{p_{\kappa}}(\boldsymbol{c} \mid \boldsymbol{w}) = \frac{\overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c})\,\overrightarrow{p}(\boldsymbol{c})}{\sum_{\boldsymbol{c} \in \Sigma^*} \overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c})\,\overrightarrow{p}(\boldsymbol{c})}. \tag{3}
$$
Under the assumption that $p$ has finite expected length, $\sum_{\boldsymbol{c} \in \Sigma^{*}} \overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c})\,\overrightarrow{p}(\boldsymbol{c})$ is always finite (Opedal et al., 2024, §2.1). In contrast to $\overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c})$, $\overrightarrow{p_{\kappa}}(\boldsymbol{c} \mid \boldsymbol{w})$ is a distribution over $\Sigma^{*}$ due to the normalization present in Eq. (3). Complementarily, we define a word $\boldsymbol{w}$'s negative context distribution as
$$
\overrightarrow{p_{\kappa}}(\boldsymbol{c} \mid \neg\boldsymbol{w}) = \frac{(1 - \overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c}))\,\overrightarrow{p}(\boldsymbol{c})}{\sum_{\boldsymbol{c} \in \Sigma^*} (1 - \overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c}))\,\overrightarrow{p}(\boldsymbol{c})}. \tag{4}
$$
The probability $1 - \overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c})$ can be thought of as follows. Given that $\overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c})$ is the probability of the event that a string sampled from $\overrightarrow{p}(\cdot \mid \boldsymbol{c})$ has $\boldsymbol{w}$ as a prefix, $1 - \overrightarrow{p}(\boldsymbol{w} \mid \boldsymbol{c})$ is the complement of that event, i.e., it is the probability that a string sampled from $\overrightarrow{p}(\cdot \mid \boldsymbol{c})$ does not have $\boldsymbol{w}$ as a prefix.
In the remainder of this paper, we will distinguish three LMs: $p$, the underlying distribution assumed to have generated the observed strings; $q$, a parameterized model whose parameters we estimate; and $r$, a pre-trained reference LM, potentially larger and trained on more data. A standard method of constructing a language model $q$ that approximates $p$ is maximum-likelihood estimation. Suppose we observe a bag of $N$ samples $\langle \boldsymbol{y}^{(n)}\rangle_{n = 1}^{N}$ where $\boldsymbol{y}^{(n)}\sim p$; then we choose a model $q$ that minimizes the following cross-entropy: $-\sum_{n = 1}^{N}\log q(\boldsymbol{y}^{(n)})$.
# 3 Defining Lexical Knowledge
Our goal is to evaluate word learning in LMs by following the trajectory of a distributional signature for each target word throughout LM training.
However, both in terms of trajectory extraction and signature design, there are many design choices. In this section, we explore and discuss the implications of a range of choice points in defining the distributional signature that is tracked during training. In §4, we discuss how to extract a trajectory from a timestamped sequence of distributional signatures.
Chang and Bergen (2022). The most direct predecessor to this work, Chang and Bergen (2022), considered a single distributional signature: the surprisal, under the LM, of the target words in contexts where the word occurs in a test corpus. This is a natural quantity to track during training, as it is equivalent to the cross-entropy loss per token, restricted only to samples from a single class. In our notation, they consider
$$
\widehat{\sigma}_{+}(\boldsymbol{w}) \stackrel{\text{def}}{=} -\frac{1}{M} \sum_{m=1}^{M} \log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}^{(m)}), \tag{5}
$$
where $\overrightarrow{q}$ is the LM being analyzed, and the contexts $\boldsymbol{c}^{(m)}$ are taken from a corpus where they occur before the word $\boldsymbol{w}$; we refer to these as positive contexts for $\boldsymbol{w}$. We observe that, under the assumption that the positive contexts are sampled from the ground truth context distribution, i.e., $\boldsymbol{c}^{(m)}\sim \overrightarrow{p_{\kappa}}(\cdot \mid \boldsymbol{w})$, Eq. (5) is a Monte Carlo estimator of the quantity
$$
\sigma_{+}(\boldsymbol{w}) \stackrel{\text{def}}{=} -\sum_{\boldsymbol{c} \in \Sigma^*} \overrightarrow{p_{\kappa}}(\boldsymbol{c} \mid \boldsymbol{w}) \log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}). \tag{6}
$$
However, even in expectation, there is one salient manner in which Chang and Bergen's (2022) distributional signature misses potentially valuable distributional information about the target word: it fails to account for the LM's distributional knowledge about $w$ in negative contexts, where $w$ is not found. Beyond this limitation, this distributional signature represents only one element in a potentially very large design space; in the remainder of this section, we also explore additional distributional signatures.
Considering Negative Contexts. Knowing the distribution of $\boldsymbol{w}$ requires not only knowing when the word is appropriate in context but also when it is inappropriate. Thus, we can instead study the LM's distribution in contexts sampled according to a word's negative context distribution $\overrightarrow{p_{\kappa}}(\cdot \mid \neg\boldsymbol{w})$, i.e., the context distribution over all those contexts that occur before a word that is not $\boldsymbol{w}$ and does not have $\boldsymbol{w}$ as a prefix; see §2. Analogously to Eq. (6), we define the following distributional signature:
<table><tr><td></td><td>Positive</td><td>Negative</td><td>All</td></tr><tr><td>True</td><td>−∑<sub>c∈Σ*</sub> p<sub>κ</sub>(c | w) log q(w | c)</td><td>−∑<sub>c∈Σ*</sub> p<sub>κ</sub>(c | ¬w) log q(w | c)</td><td>−∑<sub>c∈Σ*</sub> p<sub>κ</sub>(c) log q(w | c)</td></tr><tr><td>Intrinsic</td><td>−∑<sub>c∈Σ*</sub> q<sub>κ</sub>(c | w) log q(w | c)</td><td>−∑<sub>c∈Σ*</sub> q<sub>κ</sub>(c | ¬w) log q(w | c)</td><td>−∑<sub>c∈Σ*</sub> q<sub>κ</sub>(c) log q(w | c)</td></tr><tr><td>Reference</td><td>∑<sub>c∈Σ*</sub> p<sub>κ</sub>(c | w) |log (q(w | c) / r(w | c))|</td><td>∑<sub>c∈Σ*</sub> p<sub>κ</sub>(c | ¬w) |log (q(w | c) / r(w | c))|</td><td>∑<sub>c∈Σ*</sub> p<sub>κ</sub>(c) |log (q(w | c) / r(w | c))|</td></tr></table>
Table 1: Overview of all distributional signatures measured in our experiments.
$$
\sigma_{-}(\boldsymbol{w}) \stackrel{\text{def}}{=} -\sum_{\boldsymbol{c} \in \Sigma^*} \overrightarrow{p_{\kappa}}(\boldsymbol{c} \mid \neg\boldsymbol{w}) \log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}). \tag{7}
$$
Again, under the assumption that negative contexts are sampled from $p$ , i.e., $\boldsymbol{c}^{(m)} \sim \vec{p_{\kappa}} (\cdot \mid \neg \boldsymbol{w})$ , we can derive a Monte Carlo estimator as follows
$$
\widehat{\sigma}_{-}(\boldsymbol{w}) \stackrel{\text{def}}{=} -\frac{1}{M} \sum_{m=1}^{M} \log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}^{(m)}). \tag{8}
$$
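In practice, $\widehat{\sigma}_{+}$ and $\widehat{\sigma}_{-}$ are the same sample mean computed over different context samples. A minimal sketch, assuming a `word_surprisal` function like the one sketched in §2 and pre-sampled context lists:

```python
import numpy as np

def sigma_hat(word, contexts, surprisal_fn):
    """Monte Carlo estimate: mean surprisal of `word` over `contexts`."""
    return float(np.mean([surprisal_fn(c, word) for c in contexts]))

# positive_contexts: contexts preceding `word` in the test corpus;
# negative_contexts: contexts preceding some other word.
# sigma_plus  = sigma_hat(" dog", positive_contexts, word_surprisal)
# sigma_minus = sigma_hat(" dog", negative_contexts, word_surprisal)
```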
All contexts. Rounding out this series of related signatures, we design a distributional signature that considers an LM's predictions in all—both positive and negative—contexts
$$
\sigma_{\pm}(\boldsymbol{w}) \stackrel{\text{def}}{=} -\sum_{\boldsymbol{c} \in \Sigma^*} \overrightarrow{p_{\kappa}}(\boldsymbol{c}) \log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}), \tag{9}
$$
where $\overrightarrow{p_{\kappa}}(\cdot)$ represents the unconditional distribution over contexts.<sup>4</sup> Analogously, we derive the following Monte Carlo estimator, where $\boldsymbol{c}^{(m)}\sim \overrightarrow{p_{\kappa}}(\cdot)$:
$$
\widehat{\sigma}_{\pm}(\boldsymbol{w}) \stackrel{\text{def}}{=} -\frac{1}{M} \sum_{m=1}^{M} \log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}^{(m)}). \tag{10}
$$
Intrinsic Signatures. We turn to a discussion of a different class of distributional signatures. Rather than taking the expectation with respect to the true context distribution $\overrightarrow{p_{\kappa}}(\cdot \mid \boldsymbol{w})$, we now consider an intrinsic signature, where we take the expectation with respect to the model $\overrightarrow{q_{\kappa}}(\cdot \mid \boldsymbol{w})$. This yields three distributional signatures, analogous to those above, which are defined in the second row of Table 1. We term these $\sigma_{\mathrm{I}+}$, $\sigma_{\mathrm{I}-}$, and $\sigma_{\mathrm{I}\pm}$, respectively. We discuss the estimation of intrinsic distributional signatures in App. A.1.
Comparing to a reference distribution. The signatures $\sigma_{+}$, $\sigma_{-}$, and $\sigma_{\pm}$ estimate the relationship between the model and the underlying distribution $p$. However, the true language model $p$ may not be achievable, due both to the finite training data and to the model class itself. Thus, it is also meaningful to compare $q$ to a reference distribution $r$, which is assumed to be a larger LM trained on more data. Following this intuition, we define three reference signatures, listed in the third row of Table 1 and denoted $\sigma_{\mathrm{R}+}$, $\sigma_{\mathrm{R}-}$, and $\sigma_{\mathrm{R}\pm}$, respectively, and estimate them analogously to Eq. (5). These reference signatures are distance metrics between the target model and the reference distribution.
# 4 Analyzing Trajectories
Given our goal of studying the word acquisition process in LMs, we aim to study the trajectory of a signature $\sigma$ for various words throughout the training of the target LM. However, an entire trajectory may contain too much information for some analyses. In this section, we consider a family of statistics that can be extracted from the trajectory and review the main choice points in doing so.
Determining $AoA$ by Thresholding. While many statistics are possible, we focus on age of acquisition ($AoA$), a single number that should be interpreted as the point at which learning has advanced to a satisfactory degree. For human learners, Braginsky et al. (2016) define $AoA$ as the age at which 50% of children are reported by their caregivers to understand the word. Chang and Bergen (2022) apply this thresholding approach to LMs. Given a trajectory, they define the $AoA$ to be the first time step at which the signature reaches a threshold defined as $\tau\%$ of the way between some initial value representing the beginning of learning and some final value representing the endpoint of learning. Unfortunately, thresholding in this way is only suitable when $\widehat{\sigma}$ exhibits (roughly) monotonic change over time. While this is true of some signatures we consider, we find empirically that $\widehat{\sigma}_{\pm}$, $\widehat{\sigma}_{\mathrm{I}\pm}$, $\widehat{\sigma}_{\mathrm{I}+}$, and $\widehat{\sigma}_{\mathrm{I}-}$ are exceptions. Thus, we adopt a different approach to extracting AoAs based on the notion of a Cauchy sequence. Intuitively, we say that the target word is learned at the point in the trajectory where the value of the signature becomes close to its neighboring points in the trajectory. Our approach is defined formally in App. D. For the sake of uniformity, we apply this approach to all signatures and leave an exploration of thresholding approaches for suitable signatures to future work.
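The formal definition of this criterion lives in App. D, which is not reproduced here; the sketch below is therefore only one plausible instantiation of the idea, with the min-max normalization and window size as our own assumptions.

```python
import numpy as np

def aoa_cauchy(trajectory, epsilon=0.07, window=5):
    """Return the first index after which the (min-max normalized)
    signature stays within epsilon of every value in a trailing
    window, or None if the trajectory never settles."""
    x = np.asarray(trajectory, dtype=float)
    span = x.max() - x.min()
    if span > 0:
        x = (x - x.min()) / span
    for t in range(len(x) - window):
        if np.all(np.abs(x[t + 1 : t + 1 + window] - x[t]) < epsilon):
            return t
    return None
```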
Smoothing the Trajectory. Empirical trajectories may be noisy due to estimation errors or local instabilities during training (Datta et al., 2023). Thus, we consider several techniques for smoothing the trajectory. One approach to smoothing is parametric curve fitting; Braginsky et al. (2019) and Chang and Bergen (2022) employ such an approach and assume trajectories follow the form of a sigmoid curve. However, parametric curve fitting requires the modeler to assume the functional form of the curve. If the functional form of the curve is unknown, one can instead smooth the curve using a non-parametric method, e.g., a moving average or a generalized additive model (Hastie and Tibshirani, 1986), as done by Chang et al. (2024). For simplicity, we opt for a moving average to smooth the trajectories in this paper.
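A minimal sketch of the moving-average smoothing, which composes naturally with the convergence criterion above; the window length is an assumed hyperparameter.

```python
import numpy as np

def moving_average(trajectory, k=5):
    """Smooth a 1-D trajectory with a length-k uniform kernel."""
    kernel = np.ones(k) / k
    return np.convolve(np.asarray(trajectory, dtype=float), kernel, mode="valid")

# Typical use: smooth the raw trajectory, then extract the AoA.
# aoa = aoa_cauchy(moving_average(raw_trajectory))
```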
# 5 Methods
# 5.1 Language Models
We train several language models to explore our proposed distributional signatures.
Training Data. We use three datasets previously released with train/test splits for training and evaluating our LMs. (i) Unified: This dataset was compiled by Constantinescu et al. (2024). It consists of approximately 600M words sampled from a combination of three corpora: Project Gutenberg, Wikipedia, and OpenSubtitles (Lison and Tiedemann, 2016). Given that a typical 13-year-old person may be exposed to around 100M words (Gilkerson et al., 2017), this dataset is not fully representative of the actual input to children, although it contains a large proportion of spoken language. (ii) BabyLM: This is the 100M-word text-only corpus from the second BabyLM Challenge (Choshen et al., 2024). The dataset is designed to be relatively developmentally plausible while also containing the amount of input that a typical adolescent is exposed to. It includes child-directed speech from CHILDES (MacWhinney, 2000) and children's stories from Project Gutenberg (Gerlach and Font-Clos, 2020), as well as dialogue corpora such as the BNC and the Switchboard Corpus (Stolcke et al., 2000), along with Simple English Wikipedia and OpenSubtitles. (iii) CHILDES: This is the CHILDES subset taken from BabyLM, consisting of 29M tokens of child-directed speech. These datasets constitute an attempt to balance developmental plausibility against quantity. Our motivation for training on datasets such as BabyLM and CHILDES is to observe whether more developmentally plausible training distributions result in more human-like word learning trajectories.
Signature Estimation. To estimate the signatures for each word, we sample 100 positive and 100 negative contexts from the BabyLM test set. To ensure fair cross-model comparisons, we use the same test contexts for all models, regardless of training data.
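One simple way to realize this sampling scheme is sketched below; the whitespace tokenization and the fixed context window are illustrative simplifications, not the paper's procedure.

```python
import random

def sample_contexts(corpus_tokens, word, n=100, window=20, seed=0):
    """Sample up to n positive and n negative contexts for `word`:
    positive contexts precede an occurrence of the word, negative
    contexts precede some other word."""
    rng = random.Random(seed)
    positions = [i for i, t in enumerate(corpus_tokens) if t == word]
    others = [i for i, t in enumerate(corpus_tokens) if t != word]
    pos = [" ".join(corpus_tokens[max(0, i - window):i])
           for i in rng.sample(positions, min(n, len(positions)))]
    neg = [" ".join(corpus_tokens[max(0, i - window):i])
           for i in rng.sample(others, min(n, len(others)))]
    return pos, neg
```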
Models. We train GPT-2 from scratch following the training procedure described by Radford et al. (2019). To reduce variance in performance due to random initialization, we train three variations of each model using different random seeds. To compute the reference signatures ($\widehat{\sigma}_{\mathrm{R}+}$, $\widehat{\sigma}_{\mathrm{R}-}$, $\widehat{\sigma}_{\mathrm{R}\pm}$), we use Llama-3.1-8B as the reference distribution $r$. Full details regarding the hyperparameters, training duration, and loss curves are provided in App. C. As we are interested in analyzing the learning trajectories for models, it is important that they are trained for a reasonable duration. For models trained on BabyLM and CHILDES we apply early stopping, i.e., we choose the best model on a held-out development set, as we found that models eventually overfit. For models trained on Unified we train for 30,000 steps, or 12 epochs, following Constantinescu et al. (2024). We estimate that Chang and Bergen (2022) trained their models on about $1.6 \times 10^{9}$ input tokens (counting repetitions).<sup>8</sup>
Figure 1: Trajectories for a sample of 8 words for LMs trained on the Unified dataset. We sample one high-frequency (solid line) and one low-frequency (dashed) word from each of the categories: FUNCTION WORDS, NOUNS, ADJECTIVES, VERBS. The $y$ -axis represents the value of the estimator in all $\widehat{\sigma}$ plots. For the children, it represents the proportion of them that produced the word.
# 5.2 The Wordbank Corpus
Child AoA data comes from the North American English portion of the Wordbank database (Frank et al., 2017). For each word and month, Wordbank provides the proportion of children in the study that have produced the word by that point. The AoA is defined as the first month by which at least 50% of children have produced that word (Goodman et al., 2008; Braginsky et al., 2016). We exclude words for which we were not able to sample 100 positive context types from the BabyLM dataset, leaving us with 305 words. The words in Wordbank are divided into 4 different lexical categories: NOUNS (101), PREDICATES (124), FUNCTION WORDS (45), and OTHER (49).<sup>9</sup> The PREDICATES category is further divided into ADJECTIVES and VERBS.<sup>10</sup>
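The 50%-threshold definition of child AoA amounts to a first-crossing search over Wordbank's month-by-month production proportions; a minimal sketch with made-up proportions:

```python
def child_aoa(months, proportions, threshold=0.5):
    """First month at which at least `threshold` of children produce the word."""
    for month, prop in zip(months, proportions):
        if prop >= threshold:
            return month
    return None

# Illustrative proportions for months 16-30.
props = [0.1, 0.2, 0.3, 0.45, 0.55, 0.7] + [0.8] * 9
print(child_aoa(range(16, 31), props))  # -> 20
```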
# 6 Examining LM Learning Trajectories
Before quantitatively comparing LM and child word learning trajectories in §7, we conduct several analyses focusing solely on LM trajectories.
# 6.1 Case studies
We perform several case studies by inspecting the learning trajectories and $AoA$ scores for humans and for each distributional signature from §3. We analyze the trajectories and $AoA$ scores for LMs trained on the Unified dataset for a sample of 8 words: two FUNCTION WORDS, two NOUNS, two ADJECTIVES, and two VERBS. For each category, one word is chosen from the 10 most frequent and one from the 10 least frequent words (in the Unified dataset).
Fig. 1 shows the trajectories for these words, and Table A5 gives the $AoA$ scores. For most signatures, we observe that the higher-frequency word from a category has an earlier $AoA$ than the corresponding lower-frequency word. We also observe that most signatures yield a wide range of $AoA$ scores, but others—particularly $\widehat{\sigma}_{-}$ —show very similar (and late) $AoAs$ for all words we inspect. Table A6 shows the first and last learned words for each signature. Generally, we find that high-frequency words and function words are learned first.
# 6.2 Convergence behavior
As we rely on the Cauchy criterion to extract $AoA$ scores, we now examine how different signatures converge. Fig. 1 shows that the shape of the learning trajectories varies between signatures. Within a given signature, trajectory shapes are internally consistent to varying degrees. As expected, the reference signatures are mostly monotonically decreasing, indicating that the probability of the word of interest under the LM becomes closer to the ground truth after more iterations. Furthermore, for the corpus-based signatures, $\widehat{\sigma}_{+}$ trajectories are decreasing, whereas $\widehat{\sigma}_{-}$ are increasing. On the other hand, the intrinsic signatures and $\widehat{\sigma}_{\pm}$ are not consistently increasing or decreasing.

Figure 2: Pearson's correlation coefficients between different signatures and children's $AoA$ (C) across three datasets: CHILDES, BabyLM, and Unified.
We compute the $AoA$ scores for a given signature using a range of values for $\epsilon$ . Fig. A4 shows how many words failed to converge under different thresholds $\epsilon$ . We find that the vast majority of trajectories converge with $\epsilon = 0.15$ . For lower values of $\epsilon$ , we see as many as half of all word trajectories failing to converge on the CHILDES dataset. However, on the larger datasets BabyLM and Unified, we observe high rates of convergence across the board. Finally, it is the intrinsic signatures and $\widehat{\sigma}_{\pm}$ that show the lowest rates of convergence. As discussed above, these are precisely the same signatures that do not have an internally consistent shape. Furthermore, the figures in App. I show correlations for each pair of thresholds. With a few exceptions for extreme values, different thresholds still yield $AoA$ scores that are highly correlated. Therefore, in all our results (including those discussed above), we apply an intermediate value of $\epsilon = 0.07$ .
# 6.3 Comparing Signatures
Another important question is whether different signatures give similar $AoA$ scores to each other. App. J shows the correlation matrix of $AoA$ scores for each signature. First, the correlations are all notably higher for LMs trained on the Unified dataset. Together with the finding that convergence rates are higher for this dataset, this supports the conclusion that $AoA$ scores become more consistent as training time increases. We find that most pairs of signatures are weakly or negatively correlated, with few exceptions. In general, the various positive signatures ($\widehat{\sigma}_{+}$, $\widehat{\sigma}_{\mathrm{I}+}$, $\widehat{\sigma}_{\mathrm{R}+}$) have relatively strong correlations. Across all datasets, the most strongly correlated pair is $\widehat{\sigma}_{+}$ and $\widehat{\sigma}_{\mathrm{R}+}$. The negative signatures, except for $\widehat{\sigma}_{\mathrm{I}-}$, have weak correlations with other signatures; the one exception is the pair $\widehat{\sigma}_{-}$, $\widehat{\sigma}_{\pm}$, whose estimators are very similar.
# 7 Human vs. LM Learning Trajectories
We now examine the similarities and differences between word learning in LMs and humans.
# 7.1 Comparing Human and LM AoAs
We begin simply by measuring the Pearson correlation between human AoAs and the LM AoAs from each signature. These values are plotted in Fig. 2. Overall, we observe very weak or negative correlations. Which signature correlates most strongly changes depending on the dataset, but no correlation exceeds 0.31 in magnitude. The strongest positive correlations occur in BabyLM for $\widehat{\sigma}_{\mathrm{I}\pm}$ and $\widehat{\sigma}_{\mathrm{R}-}$, while the strongest negative correlations are from Unified and BabyLM for the $\widehat{\sigma}_{\mathrm{I}-}$ signature.
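This comparison is a plain correlation over the words for which both AoAs are available; a hedged sketch with placeholder arrays:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
child_aoa = rng.uniform(16, 30, size=305)  # months (placeholder values)
lm_aoa = rng.uniform(0, 30_000, size=305)  # training steps (placeholder values)
r, p = pearsonr(lm_aoa, child_aoa)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```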
# 7.2 Predicting AoAs from Features
We now investigate which factors predict human and LM AoAs, and compare whether these factors have similar effects. Braginsky et al. (2016) identified several interpretable features that predict human AoAs. Chang and Bergen (2022) previously fit linear models to predict LM AoAs using these features. We extend this analysis to our set of signatures. Specifically, we take the AoA scores of children and of each of our signatures as our dependent variables and consider the following predictors$^{12}$ that Braginsky et al. (2016) studied: (i) log frequency, computed with respect to each LM's training dataset for LMs and with respect to CHILDES for children; (ii) number of characters, i.e., the number of symbols from $\Sigma$ in the word; (iii) concreteness judgments, collected from human subjects by Brysbaert et al. (2014), which indicate the extent to which a word is concrete, measured on a scale from 1 (very abstract) to 5 (very concrete); (iv) mean length of utterance (MLU), computed with respect to each LM's training dataset for LMs and with respect to CHILDES for children; and (v) lexical category: NOUN, PREDICATE, FUNCTION WORD, or OTHER, annotated by Frank et al. (2017, 2021).

<table><tr><td colspan="2">Metadata</td><td colspan="5">Single-Predictor</td><td colspan="2">Multi-Predictor</td></tr><tr><td>AoA type</td><td>#words</td><td>Log freq.</td><td>Conc.</td><td>#chars</td><td>MLU</td><td>Lex. cat.</td><td>Full</td><td>Full \ Log freq.</td></tr><tr><td>Children</td><td>262</td><td>0.004</td><td>0.26</td><td>-0.003</td><td>0.032</td><td>0.20</td><td>0.417</td><td>0.363</td></tr><tr><td>\( \widehat{\sigma}_{+} \)</td><td>262</td><td>0.614</td><td>0.298</td><td>0.135</td><td>0.072</td><td>0.304</td><td>0.616</td><td>0.392</td></tr><tr><td>\( \widehat{\sigma}_{-} \)</td><td>251</td><td>0.063</td><td>0.012</td><td>0.023</td><td>0.0</td><td>0.047</td><td>0.083</td><td>0.065</td></tr><tr><td>\( \widehat{\sigma}_{\pm} \)</td><td>245</td><td>0.542</td><td>0.265</td><td>0.142</td><td>0.04</td><td>0.294</td><td>0.546</td><td>0.379</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I} + } \)</td><td>215</td><td>0.4</td><td>0.274</td><td>0.107</td><td>0.162</td><td>0.201</td><td>0.463</td><td>0.382</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}-} \)</td><td>197</td><td>0.234</td><td>0.168</td><td>0.028</td><td>0.035</td><td>0.126</td><td>0.256</td><td>0.179</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}\pm} \)</td><td>201</td><td>0.052</td><td>0.012</td><td>0.005</td><td>0.006</td><td>0.05</td><td>0.063</td><td>0.05</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R} + } \)</td><td>262</td><td>0.572</td><td>0.295</td><td>0.118</td><td>0.088</td><td>0.296</td><td>0.582</td><td>0.377</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}-} \)</td><td>262</td><td>0.013</td><td>0.0</td><td>0.001</td><td>0.007</td><td>0.003</td><td>0.033</td><td>0.013</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}\pm} \)</td><td>256</td><td>0.292</td><td>0.159</td><td>0.083</td><td>0.04</td><td>0.122</td><td>0.3</td><td>0.2</td></tr></table>

Table 2: Summary of model results for Child AoA and LMs trained on the Unified dataset. Note: lexical category does not contain the category OTHER, which includes words that could be assigned to NOUNS, PREDICATES, or FUNCTION WORDS.
Do similar factors influence LM and children's AoAs? Regression analyses for LMs trained on Unified and for children are presented in Table 2. For children, the adjusted $R^2$ with all predictors is 0.417. The strongest individual predictors of children's AoA are concreteness and lexical category. Log frequency is a notably weak predictor on its own, though it does still contribute significant predictive power when added to a model including all other predictors. These results largely reproduce those of Braginsky et al. (2016) and Chang and Bergen (2022), the latter of whom reported an adjusted $R^2$ of 0.43 for predicting child AoA from all features using a larger vocabulary of 571 words. In predicting LMs' AoA, we identify two main patterns. First, the signatures $\widehat{\sigma}_{\mathrm{I}\pm}$ and $\widehat{\sigma}_{-}$ exhibit negligible relationships with any predictor. Second, among the other signatures, log frequency is consistently the most predictive factor, mirroring the findings of Chang and Bergen (2022). The next most predictive factors are concreteness and lexical category. See the figures in App. G for visualizations of AoAs against each predictor.
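A sketch of the kind of regression behind Table 2: an ordinary least-squares fit of AoA on the features, summarized by adjusted $R^2$. The feature matrix here is random placeholder data, and the categorical lexical-category predictor is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def adjusted_r2(model, X, y):
    n, p = X.shape
    r2 = model.score(X, y)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

rng = np.random.default_rng(0)
X = rng.normal(size=(262, 4))  # log freq., concreteness, #chars, MLU (placeholders)
y = 20 + X @ rng.normal(size=4) + rng.normal(scale=2.0, size=262)
reg = LinearRegression().fit(X, y)
print(f"adjusted R^2 = {adjusted_r2(reg, X, y):.3f}")
```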
For brevity, we focus on results from the Unified dataset. Across all signatures and datasets, language models (LMs) exhibit the opposite pattern from children when it comes to log frequency and concreteness, with more frequent words having a lower AoA. While children tend to acquire concrete words earlier, language models appear to struggle more with processing concrete words and perform better with abstract ones. Furthermore, although children's AoA does not display a significant correlation with number of characters, most LM signatures reveal positive correlations. The exceptions are $\widehat{\sigma}_{\mathrm{R - }}$ and $\widehat{\sigma}_{\mathrm{I\pm}}$ (for BabyLM and CHILDES), which show slightly negative correlations. Lastly, $MLU$ follows a similar pattern in both children and language models.
Does more developmentally plausible training data result in more human-like learning patterns? From Fig. 2, we observe that the models trained on BabyLM tend to exhibit the most human-like learning trajectories according to some signatures; however, as stated above, LM trajectories are far from human-like across the board. This finding is surprising given that CHILDES, which comes from discourses between caregivers and young children, most closely resembles the input received by the young children studied in Wordbank. However, since all three datasets differ greatly in size, we cannot determine whether this result is due to data domain or dataset size. By analyzing Table A8, we find that there is a positive effect of the training set size on the predictability of $AoA$. We speculate that this may explain why Chang and Bergen (2022), who trained models on much larger datasets than ours, reported higher predictability for model AoA scores (for $\widehat{\sigma}_{+}$). Additionally, we note that in the CHILDES dataset, log frequency is not significantly more predictive than other factors, in contrast to other datasets. Overall, the results do not exhibit any notable human-like patterns.
# 8 Discussion and Conclusion
Our main objective was to explore the space of distributional signatures of words more comprehensively, with an application to understanding word learning in LMs. We showed that the distributional test adopted by Chang and Bergen (2022) and Portelance et al. (2023) can be viewed as an estimator of a more general distributional signature. This insight also enabled us to define a broader family of signatures that follow a clear typology. However, the question remains: which of these evaluations should be the focus of researchers interested in studying word learning in LMs? One of our key findings in §6.3 is that many of these signatures are complementary. This is true with respect to children's AoAs as well as in comparison to each other. Arguably, considering both positive and negative contexts provides a more complete picture of the LM's distributional knowledge, and comparing the LM's distribution against an LLM allows the signature to better reflect the gradient of the ground truth distribution, which is not directly observable. Nonetheless, each signature we propose has a clear interpretation and may be useful for specific applications, though extracting usable AoA scores is not always feasible.
In §7 we found that we could not predict children's AoAs well from any of our proposed distributional signatures. This result might seem somewhat surprising in light of Portelance et al.'s (2023) finding that LM surprisal improves predictions of children's AoAs. However, we note that that work uses $\widehat{\sigma}_{+}$ at the end of training as a predictor of AoA, rather than the AoA of the model as determined by a specific distributional signature. Our results do further corroborate Chang and Bergen's (2022) conclusions on this question, and significantly expand them to a wider variety of signatures. Additionally, our findings contribute to a growing body of work finding specific differences between the language learning patterns of humans and LMs in other domains (e.g., Evanson et al., 2023; Constantinescu et al., 2024). On the other hand, Zhuang et al. (2024b,a) show that multimodal LMs can exhibit more human-like learning trajectories and also introduce a novel training objective that further improves human-likeness.
Future work should apply our distributional tests to these and other potentially more human-like training procedures. Besides learning in a world grounded in sensory experience, children also learn through interaction both with the physical world and with other agents (Clark, 2018; Nikolaus and Fourtassi, 2023). Moreover, unlike LMs, children have constraints on production, going through one-word and two-word utterance phases (Bloom, 1970). These factors no doubt influence the kinds of words children use early in development and may account for the precedence of concrete words. There are only a few examples of training regimes for LMs inspired by interaction (Lazaridou et al., 2020; Nikolaus and Fourtassi, 2021b; Ma et al., 2024). Furthermore, the reliance on stochastic gradient descent and cross-entropy loss likely skew learning trajectories in LMs in ways that are not entirely human-like. There are many opportunities to explore more human-like LM training, and we expect that word learning will be an important evaluation of human likeness as these are explored.
Having better mapped out the space of distributional signatures of lexical knowledge, our work paves the way for comparing trajectories of language models and humans. Our findings provide strong empirical support that there are large differences between how human and artificial language learners develop throughout learning and draw attention to the fact that there is significant work to be done to explore pre-training methods and datasets that result in more developmentally plausible language models.
# 9 Limitations
Our study has several limitations. First, while we are interested in the possibility that LMs can be used as cognitive models and we attempt to use developmentally plausible data, our LMs are not trained in a way that is maximally similar to how humans learn. They lack exposure to speech, grounding, and interaction with other agents, all of which may significantly influence word learning. Second, while our proposed true and reference signatures are weighted by a distribution $\vec{p}_{\kappa}$, we only estimate this distribution using Monte Carlo estimation. Future work should explore whether alternative estimation techniques, such as those based on LLMs, yield qualitatively different results. Third, the specifics of our findings may be sensitive to our training setup. Future work should examine whether different pre-training settings yield qualitatively different results, i.e., whether our findings are robust across various setups. Finally, our study focuses on extracting AoAs from learning trajectories, but AoA is just one statistic that can be extracted from the learning trajectory.
# Acknowledgments
We thank the ARR reviewers and ACs, whose detailed feedback led to many improvements in our work. AW was supported by an ETH Postdoctoral Fellowship.
# References
Uri Berger, Gabriel Stanovsky, Omri Abend, and Lea Frermann. 2022. A computational acquisition model for multimodal word categorization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3819-3835.
Lois Bloom. 1970. Language Development: Form and Function in Emerging Grammars. MIT Press.
Mika Braginsky, Daniel Yurovsky, Virginia A. Marchman, and Michael C. Frank. 2019. Consistency and variability in children's word learning across languages. Open Mind: Discoveries in Cognitive Science, 3:52-67.
Mika Braginsky, Daniel Yurovsky, Virginia A. Marchman, and Mike C. Frank. 2016. From uh-oh to tomorrow: Predicting age of acquisition for early words across languages. Proceedings of the Annual Meeting of the Cognitive Science Society, 38:1691-1696.
Elia Bruni, Gemma Boleda, Marco Baroni, and Nam-Khanh Tran. 2012. Distributional semantics in technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 136-145, Jeju Island, Korea. Association for Computational Linguistics.
Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3):904-911.
Erin M. Buchanan, K. D. Valentine, and Nicholas P. Maxwell. 2019. English semantic feature production norms: An extended database of 4436 concepts. Behavior Research Methods, 51(4):1849-1863.
Tyler A. Chang and Benjamin K. Bergen. 2022. Word acquisition in neural language models. Transactions of the Association for Computational Linguistics, 10:1-16.
Tyler A. Chang, Zhuowen Tu, and Benjamin K. Bergen. 2024. Characterizing learning curves during language model pre-training: Learning, forgetting, and stability. Transactions of the Association for Computational Linguistics, 12:1346-1362.
Leshem Choshen, Ryan Cotterell, Michael Y. Hu, Tal Linzen, Aaron Mueller, Candace Ross, Alex Warstadt, Ethan Wilcox, Adina Williams, and Chengxu Zhuang. 2024. [Call for papers] The 2nd BabyLM Challenge: Sample-efficient pretraining on a developmentally plausible corpus. Preprint, arXiv:2404.06214.
Leshem Choshen, Guy Hacohen, Daphna Weinshall, and Omri Abend. 2022. The grammar-learning trajectories of neural language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 8281-8297.
Eve V. Clark. 2018. Conversation and Language Acquisition: A Pragmatic Approach. Language Learning and Development, 14(3):170-185.
Ionut Constantinescu, Tiago Pimentel, Ryan Cotterell, and Alex Warstadt. 2024. Investigating critical period effects in language acquisition through neural language models. Transactions of the Association for Computational Linguistics.
Arghya Datta, Subhrangshu Nandi, Jingcheng Xu, Greg Ver Steeg, He Xie, Anoop Kumar, and Aram Galstyan. 2023. Measuring and mitigating local instability in deep neural networks. In Findings of the Association for Computational Linguistics: ACL 2023, pages 2810-2823, Toronto, Canada. Association for Computational Linguistics.
Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407.
Emmanuel Dupoux. 2018. Cognitive Science in the era of Artificial Intelligence: A roadmap for reverse-engineering the infant language-learner. Cognition, 173:43-59.
Linnea Evanson, Yair Lakretz, and Jean-Rémi King. 2023. Language acquisition: Do children and language models follow similar learning stages? In Findings of the Association for Computational Linguistics: ACL 2023, pages 12205-12218.
Larry Fenson, Virginia Marchman, Donna Thal, Phillip Dale, Steven Reznick, and Elizabeth Bates. 2013. MacArthur-Bates Communicative Development Inventories. Springer.
Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing Search in Context: The Concept Revisited. In Proceedings of the 10th international conference on World Wide Web, pages 406-414.
J. R. Firth. 1957. A synopsis of linguistic theory, 1930-1955. Studies in Linguistic Analysis.
Michael C. Frank, Mika Braginsky, Daniel Yurovsky, and Virginia A. Marchman. 2021. Variability and Consistency in Early Language Learning: The Wordbank Project. The MIT Press.
Michael C. Frank, Mika Braginsky, Daniel Yurovsky, and Virginia A. Marchman. 2017. Wordbank: An open repository for developmental vocabulary data. Journal of Child Language, 44(3):677-694.
Martin Gerlach and Francesc Font-Clos. 2020. A standardized project Gutenberg corpus for statistical analysis of natural language and quantitative linguistics. Entropy, 22(1).
Daniela Gerz, Ivan Vulić, Felix Hill, Roi Reichart, and Anna Korhonen. 2016. SimVerb-3500: A large-scale evaluation set of verb similarity. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2173-2182, Austin, Texas. Association for Computational Linguistics.
Jill Gilkerson, Jeffrey A. Richards, Steven F. Warren, Judith K. Montgomery, Charles R. Greenwood, D. Kimbrough Oller, John H. L. Hansen, and Terrance D. Paul. 2017. Mapping the early language environment using all-day recordings and automated analysis. American Journal of Speech-Language Pathology, 26(2):248-265.
Judith C. Goodman, Philip S. Dale, and Ping Li. 2008. Does frequency count? Parental input and the acquisition of vocabulary. Journal of Child Language, 35(3):515-531.
John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics.
Zellig Harris. 1954. Distributional structure. Word, 10(2-3):146-162.
Trevor Hastie and Robert Tibshirani. 1986. Generalized Additive Models. Statistical Science, 1(3):297-310.
Joeri R. Hermans, Gerasimos Spanakis, and Rico Möckel. 2017. Accumulated gradient normalization. In Proceedings of the Ninth Asian Conference on Machine Learning, volume 77 of Proceedings of Machine Learning Research, pages 439-454, Yonsei University, Seoul, Republic of Korea. PMLR.
Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665-695.
Thomas Hofmann. 1999. Probabilistic latent semantic analysis. In UAI, volume 99, pages 289-296.
Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211.
Marvin Lavechin, Maureen De Seyssel, Lucas Gautheron, Emmanuel Dupoux, and Alejandrina Cristia. 2022. Reverse engineering language acquisition with child-centered long-form recordings. Annual Review of Linguistics, 8(1):389-407.
Angeliki Lazaridou, Anna Potapenko, and Olivier Tieleman. 2020. Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7663-7674, Online. Association for Computational Linguistics.
Alessandro Lenci. 2008. Distributional semantics in linguistic and cognitive research. Italian Journal of Linguistics, 20(1):1-31.
Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126-1177.
Tal Linzen. 2019. What can linguistics and deep learning contribute to each other? Language, 95(1):e99-e108.
Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the Tenth International Conference on Language Resources and Evaluation, pages 923-929.
Ziqiao Ma, Zekun Wang, and Joyce Chai. 2024. Babysit a language model from scratch: Interactive language learning by trials and demonstrations. Preprint, arXiv:2405.13828.
Brian MacWhinney. 2000. The CHILDES Project: Tools for analyzing talk: The database, Vol. 2, 3rd ed. Lawrence Erlbaum Associates Publishers.
Alec Marantz. 2001. Words. WCCFL XX Handout, USC.
Rebecca Marvin and Tal Linzen. 2018. Targeted Syntactic Evaluation of Language Models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192-1202.
AI @ Meta. 2024. The Llama 3 herd of models. Preprint, arXiv:2407.21783.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, volume 26.
Mitja Nikolaus and Abdellah Fourtassi. 2021a. Evaluating the acquisition of semantic knowledge from cross-situational learning in artificial neural networks. In Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics, pages 200-210. Association for Computational Linguistics.
Mitja Nikolaus and Abdellah Fourtassi. 2021b. Modeling the interaction between perception-based and production-based learning in children's early acquisition of semantic knowledge. In Proceedings of the 25th conference on computational natural language learning, pages 391-407, Online. Association for Computational Linguistics.
Mitja Nikolaus and Abdellah Fourtassi. 2023. Communicative Feedback in language acquisition. New Ideas in Psychology, 68:100985.
Andreas Opedal, Eleanor Chodroff, Ryan Cotterell, and Ethan Wilcox. 2024. On the role of context in reading time prediction. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3042-3058.
OpenAI. 2022. ChatGPT: Optimizing language models for dialogue.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543.
Eva Portelance, Yuguang Duan, Michael C. Frank, and Gary Lupyan. 2023. Predicting age of acquisition for children's early vocabulary in five languages using language model surprisal. Cognitive Science, 47(9).
Eva Portelance, Michael C. Frank, and Dan Jurafsky. 2024. Learning the meanings of function words from grounded language using a visual question answering model. Cognitive Science, 48(5).
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Enrico Santus, Anna Gladkova, Stefan Evert, and Alessandro Lenci. 2016. The CogALex-V shared task on the corpus-based identification of semantic relations. In Proceedings of the 5th Workshop on Cognitive Aspects of the Lexicon (CogALex - V), pages 69-79. The COLING 2016 Organizing Committee.
Andreas Stolcke, Klaus Ries, Noah Coccaro, Elizabeth Shriberg, Rebecca Bates, Daniel Jurafsky, Paul Taylor, Rachel Martin, Carol Van Ess-Dykema, and
Marie Meteer. 2000. Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3):339-373.
Wai Keen Vong, Wentao Wang, A. Emin Orhan, and Brenden M. Lake. 2024. Grounded language acquisition through the eyes and ears of a single child. Science, 383(6682):504-511.
Alex Warstadt and Samuel R. Bowman. 2022. What artificial neural networks can tell us about human language acquisition. In Algebraic Structures in Natural Language, pages 17-60. CRC Press.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English. Transactions of the Association for Computational Linguistics, 8:377-392.
Ludwig Wittgenstein. 1953. Philosophical investigations. Wiley-Blackwell.
Chengxu Zhuang, Evelina Fedorenko, and Jacob Andreas. 2024a. Lexicon-level contrastive visual-grounding improves language modeling. In Findings of the Association for Computational Linguistics: ACL 2024, pages 231-247.
Chengxu Zhuang, Evelina Fedorenko, and Jacob Andreas. 2024b. Visual grounding helps learn word meanings in low-data regimes. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1311-1329.
# A The Intrinsic Signature
We now give a more detailed overview of the intrinsic signature, which was omitted from $\S 3$.
# A.1 An Intrinsic Metric
We develop an intrinsic metric, i.e., a metric that does not rely, in expectation, on the true language model $p$. Thus, we consider the following information-theoretic quantity that resembles Eq. (6), but where the expectation is taken with respect to the model itself:
$$
\sigma_{\mathrm{I}+} \stackrel{\mathrm{def}}{=} -\sum_{\boldsymbol{c} \in \Sigma^{*}} \overrightarrow{q_{\kappa}}(\boldsymbol{c} \mid \boldsymbol{w}) \log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}). \tag{1}
$$
In contrast to Chang and Bergen's (2022) distributional signature, Eq. (1) is not grounded in an external language model. Thus, it measures a notion of knowledge internal to the language model itself. We can also, by analogy to Eq. (9), define an intrinsic metric that considers just negative contexts:
$$
\sigma_{\mathrm{I}-} \stackrel{\mathrm{def}}{=} -\sum_{\boldsymbol{c} \in \Sigma^{*}} \overrightarrow{q}_{\kappa}^{\prime}(\boldsymbol{c} \mid \neg\boldsymbol{w}) \log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}), \tag{2}
$$
and one that considers all contexts:
$$
\sigma_{\mathrm{I}\pm} \stackrel{\mathrm{def}}{=} -\sum_{\boldsymbol{c} \in \Sigma^{*}} \overrightarrow{q_{\kappa}}(\boldsymbol{c}) \log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}). \tag{3}
$$
# A.2 A Practical Estimator
We now discuss a scheme to estimate Eq. (1). First, we note that, by Bayes' rule, we have
$$
\overrightarrow{q}_{\kappa}(\boldsymbol{c} \mid \boldsymbol{w}) = \frac{\overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c})\,\overrightarrow{q}(\boldsymbol{c})}{\sum_{\boldsymbol{c}' \in \Sigma^{*}} \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}')\,\overrightarrow{q}(\boldsymbol{c}')}. \tag{4}
$$
Computing Eq. (4) exactly is intractable, as it requires a sum over all of $\Sigma^{*}$. Instead, we consider the following approximation. Given a bag of contexts $\mathcal{C} = \{\boldsymbol{c}^{(m)}\}_{m=1}^{M}$ that precede a word $\boldsymbol{w}$, we construct the following empirical approximation
$$
\widetilde{q}_{\kappa}(\boldsymbol{c} \mid \boldsymbol{w}) = \frac{\mathbb{1}\{\boldsymbol{c} \in \mathcal{C}\}\,\overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c})\,\overrightarrow{q_{\kappa}}(\boldsymbol{c})}{\sum_{m=1}^{M} \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}^{(m)})\,\overrightarrow{q_{\kappa}}(\boldsymbol{c}^{(m)})}. \tag{5}
$$
Plugging Eq. (5) into Eq. (1), we arrive at
$$
\widehat{\sigma}_{\mathrm{I}+} \stackrel{\mathrm{def}}{=} -\sum_{m=1}^{M} \widetilde{q}_{\kappa}\left(\boldsymbol{c}^{(m)} \mid \boldsymbol{w}\right) \log \overrightarrow{q}(\boldsymbol{w} \mid \boldsymbol{c}^{(m)}). \tag{6}
$$
In the limiting case, i.e., when $\mathcal{C}$ includes all of $\Sigma^{*}$ , we have $\widehat{\sigma}_{\mathrm{I}+} \to \sigma_{\mathrm{I}+}$ . Note that Eq. (6) is not a standard Monte Carlo estimator as the contexts $c^{(m)}$ may not have been drawn from $\overrightarrow{q_{\kappa}} (\cdot \mid w)$ , but it is still consistent. An analogous estimator can be derived for Eq. (2) and Eq. (3).
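To make the estimator concrete, here is a minimal R sketch of Eq. (5) and Eq. (6); the input probabilities are hypothetical placeholders for quantities that would be computed from the trained LM.

```r
# Hypothetical inputs for a single word w and M = 4 sampled contexts:
# q_w_given_c[m] stands for q(w | c^(m)) under the LM, and
# q_c[m] for the context probability q_kappa(c^(m)).
q_w_given_c <- c(0.08, 0.01, 0.20, 0.05)  # placeholder values
q_c         <- c(0.40, 0.30, 0.20, 0.10)  # placeholder values

# Empirical posterior over the sampled contexts, Eq. (5).
weights <- q_w_given_c * q_c
weights <- weights / sum(weights)

# Intrinsic estimator, Eq. (6): posterior-weighted surprisal of w.
sigma_hat_I_plus <- -sum(weights * log(q_w_given_c))
sigma_hat_I_plus
```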
# B The Signatures $\sigma_{\mathrm{R}+}$, $\sigma_{\mathrm{R}-}$, and $\sigma_{\mathrm{R}\pm}$ are Distance Metrics
The reference signatures as introduced in §3 can easily be shown to be distance metrics. Letting $x_{c} \stackrel{\mathrm{def}}{=} \log \overrightarrow{p_{\kappa}}(\boldsymbol{c} \mid \boldsymbol{w})$ and $y_{c} \stackrel{\mathrm{def}}{=} \log \overrightarrow{r}(\boldsymbol{w} \mid \boldsymbol{c})$, we can rewrite the signatures as follows:
$$
\sigma_{\mathrm{R}+} = \sum_{\boldsymbol{c} \in \Sigma^{*}} \overrightarrow{p_{\kappa}}(\boldsymbol{c} \mid \boldsymbol{w})\,|x_{c} - y_{c}| \tag{7a}
$$

$$
\sigma_{\mathrm{R}-} = \sum_{\boldsymbol{c} \in \Sigma^{*}} \overrightarrow{p_{\kappa}^{\prime}}(\boldsymbol{c} \mid \neg\boldsymbol{w})\,|x_{c} - y_{c}| \tag{7b}
$$

$$
\sigma_{\mathrm{R}\pm} = \sum_{\boldsymbol{c} \in \Sigma^{*}} \overrightarrow{p_{\kappa}}(\boldsymbol{c})\,|x_{c} - y_{c}|. \tag{7c}
$$
Because $\overrightarrow{p_{\kappa}}(\boldsymbol{c} \mid \boldsymbol{w})$, $\overrightarrow{p_{\kappa}^{\prime}}(\boldsymbol{c} \mid \neg\boldsymbol{w})$, and $\overrightarrow{p_{\kappa}}(\boldsymbol{c})$ are all greater than zero, the expressions above are weighted Manhattan distances, which are known to be distance metrics.
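For concreteness, a weighted Manhattan distance of this form can be computed as in the following R sketch; the vectors are hypothetical stand-ins for $x_{c}$, $y_{c}$, and the weights.

```r
# Weighted Manhattan distance sum_c w_c * |x_c - y_c|; it is a valid
# distance metric whenever all weights w_c are strictly positive.
weighted_manhattan <- function(x, y, w) sum(w * abs(x - y))

# Hypothetical log-probabilities over four contexts.
x <- log(c(0.10, 0.30, 0.40, 0.20))
y <- log(c(0.15, 0.25, 0.35, 0.25))
w <- c(0.4, 0.3, 0.2, 0.1)
weighted_manhattan(x, y, w)
```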
# C Training Details
Training was conducted in parallel across 8 GPUs, with gradient accumulation (Hermans et al., 2017) set to 16 steps and a per-device batch size of 4. As a result, our models were trained with an effective batch size of 512.
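That is, the effective batch size is the product of the three settings:

$$
8~\text{GPUs} \times 16~\text{accumulation steps} \times 4~\text{sequences per device} = 512.
$$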
<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td># of heads</td><td>12</td></tr><tr><td># of layers</td><td>12</td></tr><tr><td>learning rate</td><td>7e-4</td></tr><tr><td>learning rate scheduler</td><td>linear</td></tr><tr><td>precision</td><td>fp16</td></tr></table>
Table A3: Training Hyperparameters for GPT-2
<table><tr><td>Dataset</td><td>Seed 42</td><td>Seed 123</td><td>Seed 28053</td></tr><tr><td>CHILDES</td><td>2,800</td><td>2,800</td><td>2,600</td></tr><tr><td>BabyLM</td><td>4,800</td><td>7,200</td><td>6,000</td></tr><tr><td>Unified</td><td>30,000</td><td>30,000</td><td>30,000</td></tr></table>
Table A4: Final training step for the models trained with seeds 42, 123, and 28053 on each dataset.
We saved the checkpoints used for our analysis at increasing intervals throughout training. Specifically (see the sketch after this list), we saved checkpoints
- every 50 steps for integers in $(0, 1000]$;
- every 200 steps for integers in $(1000, 10000]$;
- every 500 steps for integers in $(10000, 30000]$.
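A short R sketch generating this schedule (the step ranges are exactly those listed above):

```r
# Checkpoint steps: every 50 in (0, 1000], every 200 in (1000, 10000],
# and every 500 in (10000, 30000], for 105 checkpoints in total.
checkpoint_steps <- c(seq(50, 1000, by = 50),
                      seq(1200, 10000, by = 200),
                      seq(10500, 30000, by = 500))
```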
Figure A3: Validation losses for models trained on Unified, BabyLM, and CHILDES. The curves show the necessity of an earlier stopping step for seeds 42 (blue), 123 (orange), and 28053 (green).
# D Convergence with the Cauchy Criterion
To judge the convergence of learning trajectories that are non-monotonic and may not have a well-formed shape, we develop a novel technique based on the idea of a Cauchy sequence. Let $\sigma(\boldsymbol{w}, t)$ be the value that the signature of the word $\boldsymbol{w}$ assumes at time step $t$. For a fixed tolerance parameter $\epsilon > 0$, the age of acquisition $AoA$ is defined as
$$
AoA = \alpha(\sigma, \boldsymbol{w}) = \operatorname*{argmin}_{t \in \{1, \dots, T\}} \left( \max_{s, s' \in \{t, \dots, T\}} \left| \sigma(\boldsymbol{w}, s) - \sigma(\boldsymbol{w}, s') \right| < \epsilon \right). \tag{8}
$$
This definition mirrors the definition of the convergence of a Cauchy sequence, albeit for a finite sequence. However, because $T$ is finite, for small enough $\epsilon$, we do not, in general, observe true convergence in the analytic sense. Thus, the tolerance parameter $\epsilon$ is best viewed as a hyperparameter, and our findings are dependent on the choice of $\epsilon$. However, given that nearly all learning algorithms are analyzed by letting $T \to \infty$, there is a sense in which our definition of AoA is well-founded. Specifically, if we assume that the convergence of the learning algorithm as $T \to \infty$ implies the convergence of $\sigma$, then, for every $\epsilon > 0$, there exists a number of epochs after which $\alpha(\sigma, \boldsymbol{w})$ is attained.
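A direct R implementation of Eq. (8), assuming `trajectory` is a numeric vector holding $\sigma(\boldsymbol{w}, t)$ for $t = 1, \dots, T$:

```r
# Return the earliest step t from which the remaining trajectory stays
# within epsilon of itself (Eq. 8); NA if no such step exists.
aoa_cauchy <- function(trajectory, epsilon = 0.07) {
  T_steps <- length(trajectory)
  for (t in seq_len(T_steps)) {
    tail_vals <- trajectory[t:T_steps]
    # max over s, s' of |sigma(s) - sigma(s')| in the tail is max - min.
    if (max(tail_vals) - min(tail_vals) < epsilon) {
      return(t)
    }
  }
  NA
}

# Example on a hypothetical noisy trajectory that flattens out over time.
set.seed(42)
traj <- 1 / (1:100) + rnorm(100, sd = 0.005)
aoa_cauchy(traj)
```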
Fig. A4 shows the percentage of words that did not reach convergence for various $\epsilon$ values. A word is marked as non-converged if $\sigma$ failed to converge for even a single seed.
Figure A4: Percentage of words that did not converge across various $\epsilon$ .
# E Case Studies
Table A5 presents the $AoA$ for each word illustrated in Fig. 1. The $AoA$ values for the signatures were obtained from the model trained on the Unified dataset, using seed 42, and extracted using $\epsilon = 0.07$ .
<table><tr><td>Word</td><td>Children</td><td>σ+</td><td>σ-</td><td>σ±</td><td>σI+</td><td>σI-</td><td>σI±</td><td>σR+</td><td>σR-</td><td>σR±</td></tr><tr><td>the</td><td>27.79</td><td>0.45</td><td>0.94</td><td>0.88</td><td>0.49</td><td>0.89</td><td>0.89</td><td>0.47</td><td>0.80</td><td>0.57</td></tr><tr><td>off</td><td>22.77</td><td>0.68</td><td>0.91</td><td>0.82</td><td>0.93</td><td>0.95</td><td>0.96</td><td>0.67</td><td>0.89</td><td>0.67</td></tr><tr><td>water</td><td>20.00</td><td>0.69</td><td>0.91</td><td>0.89</td><td>not conv.</td><td>not conv.</td><td>0.92</td><td>0.68</td><td>0.66</td><td>0.67</td></tr><tr><td>puzzle</td><td>24.79</td><td>0.87</td><td>0.91</td><td>0.92</td><td>0.95</td><td>0.91</td><td>0.95</td><td>0.76</td><td>0.82</td><td>0.64</td></tr><tr><td>good</td><td>24.54</td><td>0.57</td><td>0.88</td><td>0.45</td><td>0.43</td><td>0.67</td><td>0.94</td><td>0.57</td><td>0.83</td><td>0.59</td></tr><tr><td>orange</td><td>23.26</td><td>0.84</td><td>0.92</td><td>0.96</td><td>not conv.</td><td>not conv.</td><td>0.92</td><td>0.80</td><td>0.85</td><td>0.64</td></tr><tr><td>go</td><td>23.33</td><td>0.47</td><td>0.89</td><td>0.59</td><td>0.42</td><td>0.77</td><td>0.92</td><td>0.52</td><td>0.79</td><td>0.54</td></tr><tr><td>climb</td><td>26.04</td><td>0.73</td><td>0.92</td><td>0.93</td><td>0.96</td><td>not conv.</td><td>0.93</td><td>0.75</td><td>0.59</td><td>0.65</td></tr></table>
Table A5: ${AoA}$ for words in Fig. 1.
# F First and Last Acquired Words
Table A6 reports the first 10 and the last 10 words acquired according to each signature. The words come from the model trained on the Unified dataset with seed 42 and were extracted using $\epsilon = 0.07$.
<table><tr><td>σ</td><td>First acquired words</td><td>Last acquired words</td></tr><tr><td>\( \widehat{\sigma}_{+} \)</td><td>not, do, you, have, there, can, this, to, that, am</td><td>yes, washing, brush, toy, cow, clock, wash, puzzle, flower, egg</td></tr><tr><td>\( \widehat{\sigma}_{-} \)</td><td>all, so, a, he, can, this, there, on, out, for</td><td>red, paint, the, dinner, dry, milk, pretty, feed, cup, blue</td></tr><tr><td>\( \widehat{\sigma}_{\pm} \)</td><td>under, so, like, for, all, a, at, on, out, here</td><td>chocolate, elephant, doll, teacher, truck, gas, washing, kitchen, lips, basket</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}+} \)</td><td>need, can, have, gonna, this, is, you, what, not, wanna</td><td>green, swim, touch, sleep, broken, dirty, present, park, ear, frog</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}-} \)</td><td>have, a, take, that, so, can, here, all, need, now</td><td>rock, cake, money, bread, wash, cup, knife, build, teacher, sky</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}\pm} \)</td><td>can, am, need, this, a, that, there, stairs, look, first</td><td>clock, bedroom, coat, park, your, thank, sheep, away, walk, rain</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}+} \)</td><td>make, man, not, have, this, here, little, do, to, put</td><td>toy, yes, brush, flower, egg, plate, camera, star, block, washing</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}-} \)</td><td>truck, write, say, block, rock, hard, big, a, friend, knife</td><td>arm, apple, read, present, star, buy, snow, gas, brush, slow</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}\pm} \)</td><td>a, not, am, man, have, on, can, do, make, so</td><td>yes, apple, train, empty, frog, basket, toy, brush, draw, gonna</td></tr></table>
Table A6: First and last acquired words for each signature.
# G AoA vs Predictors
In the following subsections, we show how log frequency, MLU, number of characters, and concreteness each influence the AoA of each signature across different datasets. Each AoA value presented in the plots represents the average AoA across multiple seeds, extracted with $\epsilon = 0.07$. Only words that achieved convergence across all seeds were included in the analysis.
# G.1 CHILDES
(Figures: AoA values for $\widehat{\sigma}_{+}$, $\widehat{\sigma}_{-}$, $\widehat{\sigma}_{\pm}$, and children, plotted against log frequency, MLU, number of characters, and concreteness.)

(Figures: the corresponding plots for $\widehat{\sigma}_{\mathrm{I}+}$, $\widehat{\sigma}_{\mathrm{I}-}$, and $\widehat{\sigma}_{\mathrm{I}\pm}$.)
# G.3 Unified
(Figures: the corresponding AoA-versus-predictor plots for the Unified dataset.)
# H Regression Analysis
```r
library(car)  # provides vif() for multicollinearity checks

# Predictors used throughout the regression analysis.
predictors <- c("log_frequency", "concreteness", "n_chars", "mlu", "lexical_class")

# Full model and reduced model (the full model minus log frequency).
full    <- as.formula(paste("AoA ~", paste(predictors, collapse = " + ")))
reduced <- as.formula(paste("AoA ~", paste(predictors[-1], collapse = " + ")))

# Flag multicollinearity via variance inflation factors.
vif_values <- vif(lm(full, data = data))
if (max(vif_values) > 5) {
  cat("Multicollinearity detected\n")
}

# Single-predictor regressions.
for (p in predictors) {
  model <- lm(as.formula(paste("AoA ~", p)), data = data)
  cat(p, "adjusted R-squared:", summary(model)$adj.r.squared, "\n")
}

# Full and reduced regressions.
m_full    <- lm(full, data = data)
m_reduced <- lm(reduced, data = data)
cat("Full model adjusted R-squared:", summary(m_full)$adj.r.squared, "\n")
cat("Reduced model adjusted R-squared:", summary(m_reduced)$adj.r.squared, "\n")
```
Listing 1: Simplified R code for the regression analysis.
In the following regressions, we denote log frequency as LF, concreteness as Co, number of characters as NC, mean length of utterances as MLU, and lexical category as LC. No VIF value exceeds 5, indicating that while some multicollinearity may be present, it is not severe.
# H.1 Regression on Child Learning Data
<table><tr><td>Model</td><td>Predictor</td><td>Estimate</td><td>p-value</td><td>Adj. R2</td></tr><tr><td rowspan="2">LF</td><td>Intercept</td><td>0.723676</td><td>< 2e-16 ***</td><td>0.004</td></tr><tr><td>Log Frequency</td><td>0.011405</td><td>0.162</td><td></td></tr><tr><td rowspan="2">Co</td><td>Intercept</td><td>1.003250</td><td>< 2e-16 ***</td><td>0.2575</td></tr><tr><td>Concreteness</td><td>-0.096730</td><td>< 2e-16 ***</td><td></td></tr><tr><td rowspan="2">NC</td><td>Intercept</td><td>0.615790</td><td>< 2e-16 ***</td><td>-0.003</td></tr><tr><td>Number of Characters</td><td>0.004372</td><td>0.661</td><td></td></tr><tr><td rowspan="2">MLU</td><td>Intercept</td><td>0.38715</td><td>1.76e-6 ***</td><td>0.03233</td></tr><tr><td>MLU</td><td>0.03268</td><td>1.7e-3 **</td><td></td></tr><tr><td rowspan="3">LC</td><td>Function Words</td><td>0.75029</td><td>< 2e-16 ***</td><td>0.2012</td></tr><tr><td>Nouns</td><td>-0.2284</td><td>6.67e-12 ***</td><td></td></tr><tr><td>Predicates</td><td>-0.06586</td><td>0.0337 *</td><td></td></tr><tr><td rowspan="7">Full</td><td>Log Frequency</td><td>-0.047086</td><td>6.37e-07 ***</td><td>0.4177</td></tr><tr><td>Number of Characters</td><td>0.025407</td><td>0.00342 **</td><td></td></tr><tr><td>Concreteness</td><td>-0.091746</td><td>3.51e-10 ***</td><td></td></tr><tr><td>MLU</td><td>0.046211</td><td>6.70e-08 ***</td><td></td></tr><tr><td>Function Words</td><td>0.26867</td><td>0.00454 **</td><td></td></tr><tr><td>Nouns</td><td>-0.193857</td><td>1.72e-05 ***</td><td></td></tr><tr><td>Predicates</td><td>-0.084683</td><td>0.01456 *</td><td></td></tr><tr><td rowspan="6">Full \ LF</td><td>Number of Characters</td><td>0.038773</td><td>5.50e-07 ***</td><td>0.3625</td></tr><tr><td>Concreteness</td><td>-0.073379</td><td>4.91e-07 ***</td><td></td></tr><tr><td>MLU</td><td>0.043550</td><td>9.72e-07 ***</td><td></td></tr><tr><td>Function Words</td><td>-0.042864</td><td>5.50e-07 ***</td><td></td></tr><tr><td>Nouns</td><td>-0.115122</td><td>0.0085 **</td><td></td></tr><tr><td>Predicates</td><td>-0.001887</td><td>0.9527</td><td></td></tr></table>
Table A7: Children regressions. For each model, we report the adjusted $R^2$, the estimates for the predictors, and the p-values for predictor significance.
# H.2 Regression on Language Model Learning Data
The table below reports the adjusted $R^2$ values for each predictor across all datasets and signatures, as introduced in App. G. Consistent with previous analyses, the AoA values were computed using $\epsilon = 0.07$. The table shows the number of words included in the regression analysis, counting only those that remained after outlier removal and achieved convergence across all three seeds. Among all the signatures, $\widehat{\sigma}_{+}$ demonstrates the strongest predictive power across nearly all predictors.
<table><tr><td>Dataset</td><td>σ</td><td>#words</td><td>LF</td><td>Co</td><td>NC</td><td>MLU</td><td>LC</td><td>Full</td><td>Full \ LF</td></tr><tr><td rowspan="9">CHILDES</td><td>\( \widehat{\sigma}_{+} \)</td><td>222</td><td>0.365</td><td>0.106</td><td>0.065</td><td>0.004</td><td>0.162</td><td>0.376</td><td>0.192</td></tr><tr><td>\( \widehat{\sigma}_{-} \)</td><td>141</td><td>0.108</td><td>0.032</td><td>0.025</td><td>-0.001</td><td>0.037</td><td>0.105</td><td>0.046</td></tr><tr><td>\( \widehat{\sigma}_{\pm} \)</td><td>124</td><td>0.236</td><td>0.074</td><td>0.06</td><td>0.012</td><td>0.081</td><td>0.252</td><td>0.129</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}+} \)</td><td>103</td><td>0.173</td><td>0.110</td><td>0.035</td><td>0.014</td><td>0.068</td><td>0.193</td><td>0.125</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}-} \)</td><td>77</td><td>0.115</td><td>-0.003</td><td>0.039</td><td>0.003</td><td>0.073</td><td>0.132</td><td>0.088</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}\pm} \)</td><td>163</td><td>-0.001</td><td>-0.002</td><td>-0.001</td><td>-0.0</td><td>-0.002</td><td>-0.006</td><td>-0.005</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}+} \)</td><td>210</td><td>0.412</td><td>0.146</td><td>0.069</td><td>0.002</td><td>0.199</td><td>0.422</td><td>0.232</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}-} \)</td><td>229</td><td>0.006</td><td>-0.0</td><td>0.013</td><td>0.006</td><td>0.011</td><td>0.048</td><td>0.028</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}\pm} \)</td><td>228</td><td>0.182</td><td>0.103</td><td>0.078</td><td>0.016</td><td>0.093</td><td>0.216</td><td>0.162</td></tr><tr><td rowspan="9">BabyLM</td><td>\( \widehat{\sigma}_{+} \)</td><td>257</td><td>0.48</td><td>0.168</td><td>0.092</td><td>0.026</td><td>0.217</td><td>0.483</td><td>0.278</td></tr><tr><td>\( \widehat{\sigma}_{-} \)</td><td>243</td><td>0.124</td><td>0.052</td><td>0.029</td><td>0.014</td><td>0.083</td><td>0.125</td><td>0.1</td></tr><tr><td>\( \widehat{\sigma}_{\pm} \)</td><td>162</td><td>0.413</td><td>0.222</td><td>0.091</td><td>-0.001</td><td>0.29</td><td>0.433</td><td>0.323</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}+} \)</td><td>130</td><td>0.383</td><td>0.224</td><td>0.179</td><td>0.107</td><td>0.221</td><td>0.421</td><td>0.363</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}-} \)</td><td>80</td><td>0.288</td><td>0.2</td><td>0.103</td><td>0.027</td><td>0.135</td><td>0.297</td><td>0.229</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}\pm} \)</td><td>209</td><td>0.056</td><td>0.066</td><td>0.016</td><td>0.004</td><td>0.063</td><td>0.085</td><td>0.08</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}+} \)</td><td>253</td><td>0.407</td><td>0.204</td><td>0.082</td><td>0.057</td><td>0.188</td><td>0.425</td><td>0.291</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}-} \)</td><td>254</td><td>0.036</td><td>0.061</td><td>-0.0</td><td>-0.001</td><td>0.072</td><td>0.079</td><td>0.079</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}\pm} \)</td><td>257</td><td>0.087</td><td>0.023</td><td>0.025</td><td>0.011</td><td>0.023</td><td>0.097</td><td>0.06</td></tr><tr><td rowspan="9">Unified</td><td>\( \widehat{\sigma}_{+} \)</td><td>262</td><td>0.614</td><td>0.298</td><td>0.135</td><td>0.072</td><td>0.304</td><td>0.616</td><td>0.392</td></tr><tr><td>\( \widehat{\sigma}_{-} \)</td><td>251</td><td>0.063</td><td>0.012</td><td>0.023</td><td>0.0</td><td>0.047</td><td>0.083</td><td>0.065</td></tr><tr><td>\( \widehat{\sigma}_{\pm} \)</td><td>245</td><td>0.542</td><td>0.265</td><td>0.142</td><td>0.04</td><td>0.294</td><td>0.546</td><td>0.379</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}+} \)</td><td>215</td><td>0.4</td><td>0.274</td><td>0.107</td><td>0.162</td><td>0.201</td><td>0.463</td><td>0.382</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}-} \)</td><td>197</td><td>0.234</td><td>0.168</td><td>0.028</td><td>0.035</td><td>0.126</td><td>0.256</td><td>0.179</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{I}\pm} \)</td><td>201</td><td>0.052</td><td>0.012</td><td>0.005</td><td>0.006</td><td>0.05</td><td>0.063</td><td>0.05</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}+} \)</td><td>262</td><td>0.572</td><td>0.295</td><td>0.118</td><td>0.088</td><td>0.296</td><td>0.582</td><td>0.377</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}-} \)</td><td>262</td><td>0.013</td><td>0.0</td><td>0.001</td><td>0.007</td><td>0.003</td><td>0.033</td><td>0.013</td></tr><tr><td>\( \widehat{\sigma}_{\mathrm{R}\pm} \)</td><td>256</td><td>0.292</td><td>0.159</td><td>0.083</td><td>0.04</td><td>0.122</td><td>0.3</td><td>0.2</td></tr></table>
Table A8: Adjusted $R^2$ for the linear models predicting the LMs' AoA.
# I The Impact of Convergence Thresholds on AoA Extraction
The Pearson correlations reported in Fig. A5 illustrate how the $AoA$ values extracted using varying $\epsilon$ correlate with one another. This analysis aims to determine whether the choice of $\epsilon$ significantly impacts the results. As discussed in §6.2, with few exceptions, the results across different $\epsilon$ values show high correlations. Therefore, our analysis remains consistent regardless of the choice of $\epsilon$.
Figure A5: For each estimator, we present the Pearson correlation coefficient matrix comparing different $\epsilon$. Warmer colors indicate stronger positive correlations; cooler colors indicate stronger negative correlations.
# J Correlation of Various AoA
The figures in this section show how each signature's $AoA$ values correlate with one another, for each model trained on a different dataset. As a result, the dataset itself influences the correlation patterns among the different signatures. For example, the Unified dataset displays only positive correlations, whereas the CHILDES and BabyLM datasets exhibit negative correlations for $\widehat{\sigma}_{\mathrm{I}\pm}$ and $\widehat{\sigma}_{\mathrm{R}-}$. The $AoA$ values were extracted using $\epsilon = 0.07$.
(Figures: pairwise Pearson correlation matrices of the signatures' $AoA$ values for CHILDES, BabyLM, and Unified.)
|
2025/A Distributional Perspective on Word Learning in Neural Language Models/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cd68572bbccfb635fa92f43c255f6d063dd71e37fb64d6a7b2ac1433e3282593
size 2567666
2025/A Distributional Perspective on Word Learning in Neural Language Models/layout.json
ADDED
The diff for this file is too large to render.
2025/A Grounded Typology of Word Classes/a9196626-0ba8-4e82-938e-60cbb4572659_content_list.json
ADDED
The diff for this file is too large to render.
2025/A Grounded Typology of Word Classes/a9196626-0ba8-4e82-938e-60cbb4572659_model.json
ADDED
The diff for this file is too large to render.
2025/A Grounded Typology of Word Classes/a9196626-0ba8-4e82-938e-60cbb4572659_origin.pdf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e2e79287d6183192f2808cedb2a16e075c5bbff43a49fb8e01b14b162dee43a8
size 3618803
2025/A Grounded Typology of Word Classes/full.md
ADDED
@@ -0,0 +1,514 @@
# A Grounded Typology of Word Classes
Coleman Haley $^{\text{A}}$ Sharon Goldwater $^{\text{A}}$ Edoardo Ponti $^{\text{A}\bullet}$
$^{\text{A}}$University of Edinburgh $^{\bullet}$University of Cambridge
{coleman.haley,sgwater,eponti}@ed.ac.uk
# Abstract
We propose a grounded approach to meaning in language typology. We treat data from perceptual modalities, such as images, as a language-agnostic representation of meaning. Hence, we can quantify the function-form relationship between images and captions across languages. Inspired by information theory, we define "groundedness", an empirical measure of contextual semantic contentfulness (formulated as a difference in surprisal) which can be computed with multilingual multimodal language models. As a proof of concept, we apply this measure to the typology of word classes. Our measure captures the contentfulness asymmetry between functional (grammatical) and lexical (content) classes across languages, but contradicts the view that functional classes do not convey content. Moreover, we find universal trends in the hierarchy of groundedness (e.g., nouns $>$ adjectives $>$ verbs), and show that our measure partly correlates with psycholinguistic concreteness norms in English. We release a dataset of groundedness scores for 30 languages. Our results suggest that the grounded typology approach can provide quantitative evidence about semantic function in language.
# 1 Introduction
Within linguistics, typology is the subfield focused on the study of patterns and variation across the world's languages (Croft, 2002, pp. 1-2). To identify such patterns, linguists must carefully identify phenomena of interest within languages, and then align them with one another. For example, vowels exist in a continuous acoustic and perceptual space, without clear boundaries between them. To define vowel categories and align systems across languages, linguists rely largely on acoustic properties of the speech signal—reducing the problem to a physically grounded, empirical one (Liljencrants et al., 1972; Cotterell and Eisner, 2017).
Figure 1: Mean and standard deviation of per-language mutual information estimates between word class and image. Across 30 languages, we see clear and consistent tendencies about which parts of speech are more "grounded", corresponding to a distinction between lexical and functional classes.
While empirically grounding language form (surface structure like vowels) is typically straightforward, language is not just a formal system, but also a functional one. Many questions within typology relate to the relationship between form and meaning, especially in domains like morphology and syntax. Typically, typologists manually identify semantic/functional roles such as "subject" and "causative" and study their expression across languages (Haspelmath, 2010; Greenberg, 1966). Unlike many definitions based on form, definitions based on meaning are left up to subjective discretion, leading to debates which reduce to the definition of particular terms cross-linguistically (Haspelmath, 2007, 2012; Plank, 1994).
Instead, we propose a "grounded" approach to typology, which, under certain assumptions, allows the quantification and cross-linguistic comparison of language function and semantics. By looking at sentences produced as captions of the same image across languages, we can use the image as an evidence-based, language-agnostic representation of the shared semantics underlying these utterances, similar to the evidence-based acoustic signal in the study of vowel spaces.
In this work, we specifically focus on semantic
contentfulness—how semantically informative a given word token is. We introduce a way to empirically quantify contentfulness, groundedness, which relies on vision-and-language models. Groundedness measures how much less surprising a word is when we know the perceptual stimulus (i.e., the image) it describes. The difference between a word token's surprisal under a language model and its surprisal under an image captioning model is an estimate of the pointwise mutual information: the greater this difference (LM > captioning), the more grounded the word is in that context.
As a case study, we apply this measure to the study of the typology of word classes ("parts of speech"). Literature from cognitive, psycho-, and neurolinguistics points to contentfulness being an organizing factor in word class processing and even formation and structure: low-content (functional) word classes have many properties that differ from those of high-content (lexical) classes (Dubé et al., 2014; Bird et al., 2003; Chiarello et al., 1999). Yet, there has been no cross-linguistic study of the relationship between contentfulness and word class.
Using our groundedness measure to quantify semantic contentfulness, we can estimate the mutual information of a word class with a caption's meaning (image). We find our measure largely rediscovers the distinction between lexical and functional word classes across 30 languages. Further, though it correlates only weakly with psycholinguistic norms for imageability and concreteness in English, it provides an intuitive ranking (nouns $>$ adjectives $>$ verbs) across languages. On the other hand, it contradicts the view of adpositions as a "semi-lexical" class (Corver and Riemsdijk, 2001) and suggests grammatical word classes do carry some semantic content. These results thus partly validate and partly falsify received wisdom about word class contentfulness. They suggest the utility of this measure as a general tool for studying contentfulness in linguistics, and of taking a grounded approach to typological problems. We release the model used to estimate our measure and a dataset of groundedness values in 30 languages.$^{1}$
# 2 Background
An excellent example of the relevance of the relationship between semantic function and linguistic form to typology is word classes. Within a particular language, there are typically groups of words
unified by the (formal) contexts in which they can appear. Further, this distribution of words is not arbitrary, but unified by a particular semantic prototype. For example, in English, nouns are a class of words which prototypically denote physical objects or things and can follow words like "the", "this", and "that". However, not all languages have words like "the", and so an equivalent formal-structural criterion cannot be given (Haspelmath, 2012). On the other hand, semantic criteria are not sufficient to describe these classes: most languages can express prototypical verb or adjective meanings with the syntactic distribution of a noun.
The elusiveness of a cross-linguistic definition for word classes leads to many debates about particular languages "having" or "not having" a distinction between (e.g.) nouns and verbs on the basis of a mix of formal and semantic criteria (cf. Kaufman, 2009; Hsieh, 2019; Richards, 2009; Weber, 1983; Floyd, 2011). Here, we investigate word classes as operationalized in a framework where there is a fixed set of universally applicable word classes, as set out in the Universal Dependencies project (de Marneffe et al., 2021). While this is problematic in general, our aim is not to claim that the assignment of word classes is precisely correct, but rather to empirically and quantitatively investigate the functional/semantic dimension of this common operationalisation of word class. In future work, we aim to investigate the relationship between these measures and non-prototypical parts of speech.
# 2.1 Contentfulness and word class
In this work, we focus on the related distinction between lexical/contentful word classes (e.g. nouns, verbs, and adjectives) and functional/grammatical word classes. Functional word classes are typically closed-class, meaning they do not admit new members and typically do not exhibit rich productive morphology; they tend to express highly grammatical and abstract meanings. Lexical classes are typically open class, productively admitting new members, and their meanings tend to be more concrete and contentful (Corver and Riemsdijk, 2001).
Complications about these generalized categories and tendencies abound, however. For example, in some languages like Jaminjung, prototypically lexical categories like verbs are closed class (Schultze-Berndt, 2000; Pawley, 2006). Further, both the abstraction and semantic contentfulness of particular members of a given word class can be quite variable. For example, a noun like "factor"
has a highly abstract meaning, while the meaning of the preposition "to" is intuitively more abstract than the preposition "above", despite belonging to the same, "abstract" grammatical word class. Further, over time words can change in both their contentfulness and even word class through processes like grammaticalization (Bisang, 2017).
Nevertheless, the complex relationship between contentfulness and word class remains unexplored through a cross-linguistic empirical lens—perhaps due to the difficulties of measuring such properties.
# 2.2 Measuring contentfulness
The relationship between contentfulness and word class has not been explored cross-linguistically; however, a significant literature within the language sciences has investigated related concepts.
While theoretical linguistics has focused on a distinction between content and function words, psycholinguistics has focused on semantic dimensions like imageability, concreteness, and strength of perceptual experience. Measures of these dimensions have relied on subjective, decontextualized human judgements, but nevertheless predict processing differences between word classes, such as asymmetries in the processing of nouns and verbs in certain aphasias (Bird et al., 2003; Dubé et al., 2014; Lin et al., 2022). Because we operationalize meaning as images, notions such as imageability seem especially related to our groundedness measure. However, as discussed in Section 5.4, these concepts differ from our measure in that informativity is not a major factor in their definition. For example, while both "zebra" and "woman" are highly concrete nouns, the former has higher groundedness on average, because although both are often strongly associated with an image, "zebra" is more informative/surprising, especially if the image is unavailable—thus, the image adds more information in that case.
As shown by the prior example, our measure is also closely related to another concept widely studied in computational psycholinguistics: surprisal. Like our groundedness measure, surprisal has an intuitive link to contentfulness from an information theoretic perspective, and has been extensively studied in relation to processing difficulty (Hale, 2001; Levy, 2008; Smith and Levy, 2013; Wilcox et al., 2023; Staub, Forthcoming). However, surprisal entangles formal and functional information in language. As such, cross-linguistic comparisons based on surprisal are challenging, since form is
language-specific (Park et al., 2021). We aim to focus on information due to language function, separated from form. Surprisal must also encode grammatical uncertainty (alternative ways of expressing the same meaning, like "knight" and "cavalier"), as opposed to surprisal due only to what meanings are being expressed. Our image captioning model quantifies how many bits of information remain after the meaning is known. Our measure then quantifies how much of the LM surprisal is explained by the meaning (image).
# 3 Method
In this section, we define a token's groundedness, and show how we can use this to estimate the mutual information between parts of speech and representations of meaning. Let the set of word types in a language be $\mathcal{W}$ . We assume a model of the data generation process where given a meaning $m$ , a sentence is constructed by iteratively sampling a word $w_{t} \in \mathcal{W}$ conditioned on $m$ and previous words $\mathbf{w}_{< t}$ . As mentioned previously, the groundedness of a token is given by its pointwise mutual information (PMI) with the meaning.
$$
\operatorname{PMI}(w_t; m \mid \mathbf{w}_{<t}) = \log \frac{p(w_t \mid m, \mathbf{w}_{<t})}{p(w_t \mid \mathbf{w}_{<t})} \tag{1}
$$
As we cannot access the true meaning $m$ , we must approximate it with a proxy. A good proxy for $m$ should be language-neutral, and will make estimating the probabilities in Equation 1 straightforward across languages. In this work, we focus on images as a language-neutral representation of meaning. Images capture rich, language-independent information about the world state described by an image, and have proved useful as a method for aligning meanings across languages (Rajendran et al., 2016; Gella et al., 2017; Mohammadshahi et al., 2019; Wu et al., 2022). Further, a major strength of images as a meaning representation is that estimating both quantities in Equation 1 becomes straightforward with neural models: $p_{\phi}(w_t|m,\mathbf{w}_{< t})$ corresponds to the probability of the token under an image captioning model, while $p_{\theta}(w_t|\mathbf{w}_{< t})$ corresponds to its probability under a language model.
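
To make this concrete, the estimator is just a difference of log probabilities under the two models. The sketch below assumes hypothetical `caption_logprob` and `lm_logprob` wrappers that return a token's log probability under the captioning model and the language model, respectively; it is illustrative rather than the exact implementation used here.

```python
from typing import Callable, Sequence

def token_groundedness(
    tokens: Sequence[str],
    image,  # the meaning proxy m, e.g. a loaded image
    caption_logprob: Callable[[str, Sequence[str], object], float],
    lm_logprob: Callable[[str, Sequence[str]], float],
) -> list[float]:
    """Return PMI(w_t; m | w_<t) for every token of a caption (Equation 1)."""
    pmis = []
    for t, token in enumerate(tokens):
        context = tokens[:t]
        # PMI is the log-ratio of the grounded and text-only probabilities,
        # i.e. the drop in surprisal once the image is available.
        pmis.append(caption_logprob(token, context, image) - lm_logprob(token, context))
    return pmis
```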
Using images as a representation of meaning does have some implications for our approach. For instance, verbs, which usually denote events and are more temporally unstable (Givon, 1984) than other parts of speech, may appear less grounded than they would under a different meaning representation, such as
videos. Further, the language of image captions is somewhat restricted in terms of grammatical structure and lexical items, making the analysis of long-tail phenomena or highly abstract language challenging (Ferraro et al., 2015; Alikhani and Stone, 2019). Future work could use our framework to explore other meaning representations, such as symbolic models or videos (though doing so involves overcoming further dataset and modeling challenges). Still, the language-neutral nature and rich information content of images allows us to study groundedness for a wide range of words, languages, and linguistic contexts.
Noting that a model's surprisal is its negative log probability, we can view groundedness as a difference in surprisal, corresponding to how much more expected the token is under the grounded model than under the textual model. As such, the PMI should rarely take on negative values, because the captioning model has more information (both image and text) than the language model (text only). However, for some tokens, such as those that are highly grammatical or structural, the PMI should be close to 0.
In this work, we study the groundedness of word classes. Drawing inspiration from functionalist typology, we treat a word class $\mathcal{C}_i$ as a label selected by a linguist for a word in its context. We make an assumption that this label is independent of our meaning representation given a word's context, allowing us to define the following joint distribution:
$$
p(\mathcal{C}_i, m \mid \mathbf{w}_{<t}) = \sum_{w_t \in \mathcal{W}} p(\mathcal{C}_i \mid w_t, \mathbf{w}_{<t}) \, p(w_t, m \mid \mathbf{w}_{<t}) \tag{2}
$$
We can then formulate the mutual information between a word class and meaning as the expected value of the PMI between each token labeled with that class, and the token's associated image:
$$
I[\mathcal{C}_i; m \mid \mathbf{w}_{<t}] = \mathbb{E}_{p(\mathcal{C}_i, m, \mathbf{w}_{<t})}\left[ \log \frac{p(w_t \mid \mathbf{w}_{<t}, m)}{p(w_t \mid \mathbf{w}_{<t})} \right] \tag{3}
$$
Given our factorization of the joint, we can perform a Monte Carlo estimation of the expectation by simply averaging groundedness over all the tokens tagged with $\mathcal{C}_i$ in the data $\mathcal{D}$ :
$$
\hat{I}[\mathcal{C}_i; m \mid \mathbf{w}_{<t}] = \frac{\sum_{(m, \mathbf{w}_{<t}) \in \mathcal{D}} \mathbb{1}_{\mathcal{C}_{w_t} = \mathcal{C}_i} \log \frac{p_{\phi}(w_t \mid \mathbf{w}_{<t}, m)}{p_{\theta}(w_t \mid \mathbf{w}_{<t})}}{\sum_{w_t \in \mathcal{D}} \mathbb{1}_{\mathcal{C}_{w_t} = \mathcal{C}_i}} \tag{4}
$$
<table><tr><td>Model</td><td>Gemma PT</td><td>PaliGemma CT</td><td>COCO-35L FT</td></tr><tr><td>Img. Cap.</td><td>text</td><td>multimodal</td><td>multimodal</td></tr><tr><td>LM</td><td>text</td><td>multimodal</td><td>text</td></tr></table>
Table 1: We match the data points on which the language model and image captioning model were trained. The three datasets are the Gemma pre-training mixture (PT), the PaliGemma multimodal data for continued training (CT), and COCO image-caption pairs for fine-tuning (FT). Cells indicate whether each model observes a dataset as text-only (text) or as image-text (multimodal) data.
where $\mathbb{1}_{\mathcal{C}_{w_t} = \mathcal{C}_i}$ is 1 when a token's class is $\mathcal{C}_i$ and 0 otherwise. We note that our groundedness measure and our mutual information estimates are conditional on linguistic context. As such, a word which is very grounded in one context could be hardly grounded in another, due to disambiguating information in the preceding context. Some information about $m$ will generally be conveyed by $\mathbf{w}_{<t}$; however, our mutual information estimates are aggregated over all contexts in which a word class occurs, and on average this contribution is small.
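
As an illustration, the estimator in Equation 4 reduces to a per-class average of token-level PMIs. A minimal sketch, assuming an iterable `scored_tokens` of (POS tag, PMI) pairs produced by the two models together with a POS tagger:

```python
from collections import defaultdict

def mi_by_word_class(scored_tokens):
    """Monte Carlo estimate of I[C_i; m | w_<t]: the mean PMI per POS (Equation 4)."""
    sums, counts = defaultdict(float), defaultdict(int)
    for pos, pmi in scored_tokens:
        sums[pos] += pmi
        counts[pos] += 1
    return {pos: sums[pos] / counts[pos] for pos in sums}
```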
# 4 Experimental setup
Captioning model $p_{\phi}(w_t | \mathbf{w}_{<t}, m)$ As our image captioning model, we use the recently released PaliGemma model (Beyer et al., 2024). This model is by far the state-of-the-art among publicly available multilingual image captioning models. PaliGemma consists of an image encoder, initialized from the SigLIP-So400m model (Zhai et al., 2023), and a transformer decoder language model, initialized from the Gemma-2B language model (Gemma, 2024). A linear projection maps from the image encoder space to a sequence of 256 tokens in the language model's embedding space. The whole system is then trained on a mix of vision-and-language datasets, including the unreleased WebLI dataset with 10 billion image-caption pairs in 109 languages (Chen et al., 2023), and the CC3M-35L dataset consisting of 3 million image-caption pairs in each of 35 languages (Thapliyal et al., 2022).
While PaliGemma is a general-purpose vision-and-language model, it is designed to be fine-tuned on and applied to individual tasks. As such, we use the open-source paligemma-3b-ft-coco35-224 checkpoint for multilingual captioning, which has been fine-tuned on COCO-35L.
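
For reference, the captioning probabilities can be obtained with the Hugging Face transformers implementation of PaliGemma; the checkpoint identifier below follows the name given above and is assumed to resolve on the model hub.

```python
import torch
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-ft-coco35-224"  # assumed hub location of the checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

def caption_token_logprobs(image, caption: str) -> torch.Tensor:
    """Log probability of each input position given the image and its prefix."""
    inputs = processor(text=caption, images=image, return_tensors="pt")
    with torch.no_grad():
        logprobs = model(**inputs).logits.log_softmax(-1)
    ids = inputs["input_ids"][0]
    # Shift: position i predicts token i+1. Image-placeholder positions would
    # be masked out before aggregating scores to the word level.
    return logprobs[0, :-1].gather(-1, ids[1:, None]).squeeze(-1)
```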
Language model $p_{\theta}(w_t \mid \mathbf{w}_{<t})$ Our aim is to use a language model as similar to our captioning
model $p_{\phi}(w_t \mid \mathbf{w}_{<t}, m)$ as possible. This is critical to obtaining good (P)MI estimates, which rely on the difference in surprisal between the two models. If the language model is not adapted to the image captioning domain, it may under-estimate the probability of particular words, leading to an over-estimation of mutual information. We therefore aim to match the training data between the language model and image captioning model, such that they see the same set of captions.
To do so, we initialize our language model with the weights from the pretrained PaliGemma model paligemma-3b-pt-224. However, out of the box, the decoder behaves degenerately when no image is provided, so we need to adapt the model to not expect image information and to match the training data of the captioning model. To do so, we fine-tune the language model on the captions only from the COCO-35L dataset. In this way, we ensure the models have observed the same data during training and are adapted to the same domain, and are thus maximally comparable. Table 1 summarizes the data matching between the two models. Further implementational and POS tagging details are in Appendix A.
Evaluation Datasets We also need multilingual image captioning datasets for evaluation which are not observed during training. For this, we measure groundedness on three separate datasets, each with its own strengths and weaknesses. First, we use Crossmodal-3600. This dataset includes captions for 3,600 images across a range of cultures, manually captioned by fluent speakers of 36 typologically diverse languages. However, it is relatively small per language compared to other datasets. Further, the independence of the captions means that there is greater diversity in what aspects of an image are being described across languages (Liu et al., 2021; Ye et al., 2024; Berger and Ponti, 2024).
Our second dataset, the validation set of COCO-35L, addresses several of these issues. It is larger, with 5 captions each for 5,000 images and 35 languages,$^{2}$ yielding 25,000 captions per language. Further, the captions are machine translations of each other, ensuring more comparable semantic content across languages (Beekhuizen et al., 2017), at the expense of centering the perspective of English speakers and of introducing machine translation artifacts.
Finally, we consider Multi30K. This dataset

Figure 2: Heatmap of mutual information estimates across parts of speech in thirty languages. Cells show the statistical significance of a word class's groundedness $(\mathrm{MI} > 0)$ . Unattested classes are white. Some functional classes display non-significant levels of groundedness in several languages, while lexical classes dominantly show highly significant grounding.
comprises 30,000 images captioned 5 times each in English, with a single caption per image manually translated into French, German, Czech, and Arabic. This dataset is therefore large on the individual language level, but with limited language coverage. It has the comparability of being translated and the trustworthiness of human translation, but may still be vulnerable to translationese. By looking at all three of these datasets for similar generalizations about the relationship between groundedness and part of speech, we obtain a picture that is robust to the weaknesses of the individual datasets.
# 5 Results
The following sections quantitatively investigate the trends in our groundedness measure across languages and word classes. We begin by examining which word classes exhibit significant groundedness (Section 5.1), followed by an analysis of cross-linguistic trends and their consistency (5.2 and 5.3). Finally, we relate our findings to contentfulness-related psycholinguistic norms (5.4).
# 5.1 Which word classes are grounded?
We first investigate the evidence for groundedness in each word class—that is, for each part of speech, we ask whether its estimated mutual information with the image is significantly greater than zero.
To compute significance levels, we use a one-sample permutation test. Taking the set of PMIs for a part of speech (POS) in a language, we sample up to 500 PMIs at a time from all datasets and

Figure 3: Word token level distributions of the groundedness measure (PMI) across all languages and datasets, grouped by part of speech (word class). We also report the estimated marginal mean and ranking of each word class. Colors are based on the ranking of classes, rather than their average PMIs. Overall, the distribution and estimated ranking of word classes strongly suggest our groundedness measure quantitatively captures the distinction between lexical and functional classes.
randomly permute their signs (assign $+$ or $-$ with equal probability to each PMI value), then average these values to produce a new estimate of mutual information (MI). We repeat this process to produce $10^{5}$ permuted estimates. By measuring how often our estimate based on the observed data is greater than the permuted estimate, we obtain the $p$-value,$^{3}$ i.e., the probability that our observations would have occurred under the null hypothesis of $\mathrm{MI} = 0$.
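
A minimal sketch of this sign-permutation test, assuming `pmis` is a NumPy array of PMI values for one part of speech in one language:

```python
import numpy as np

def permutation_pvalue(pmis: np.ndarray, n_perm: int = 10**5, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    observed = pmis.mean()  # the observed MI estimate
    # Under the null hypothesis MI = 0, the PMIs are symmetric around zero,
    # so flipping their signs leaves the distribution unchanged.
    signs = rng.choice([-1.0, 1.0], size=(n_perm, pmis.size))
    permuted = (signs * pmis).mean(axis=1)
    # p-value: how often a permuted estimate reaches the observed one.
    return float((permuted >= observed).mean())
```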
Results are shown in Figure 2. Overall, the results suggest most or all word classes contribute some information about the image they describe—in line with theories in linguistics that emphasize the lexical aspects of categories which are traditionally considered functional (Corver and Riemsdijk, 2001; Bisang, 2017). Interestingly, subordinating and coordinating conjunctions do not consistently reject the null hypothesis, suggesting there is little evidence the image is informative for how many clauses a speaker uses to describe an image.
# 5.2 Which word classes are more grounded?
We hypothesize that the cross-linguistically consistent trends in word class groundedness correspond to a cline which is a continuous analogue of the lexical-functional word class distinction. To isolate the contribution of word class identity to mutual information cross-linguistically, we compute estimated marginal means (EMMs) for each word class's groundedness, and perform a post-hoc pairwise comparison test of the means. The results of this analysis are displayed in Figure 3. All pairwise comparisons except between pronouns and particles are statistically significant, leading to a near total ranking of word classes. We find that lexical word classes (proper nouns, nouns, adjectives, verbs, numbers, and adverbs) have higher groundedness than functional word classes (particles, auxiliaries, conjunctions, determiners, and adpositions), with pronouns ranking together with particles at the upper end of the functional categories. The ranking corroborates ideas from cognitive linguistics which place nouns, adjectives, and verbs along a lexical-functional continuum, with nouns $>$ adjectives $>$ verbs (Rauhut, 2023). On the other hand, it does not neatly align with ideas in linguistic theory about adpositions as a semi-lexical class (Corver and Riemsdijk, 2001), which suggest they should behave more like other lexical classes compared to functional classes. Instead we see similar or greater mutual information for other functional classes, suggesting they could be more meaning-bearing than traditionally viewed.
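
The post-hoc comparison can be sketched with statsmodels' Tukey HSD as a stand-in for the exact EMM-based procedure used here; the group names and sample values below are purely illustrative:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# toy PMI samples for three word classes
pmi = np.concatenate([
    rng.normal(2.0, 1.0, 100),  # NOUN
    rng.normal(1.2, 1.0, 100),  # VERB
    rng.normal(0.1, 1.0, 100),  # ADP
])
pos = ["NOUN"] * 100 + ["VERB"] * 100 + ["ADP"] * 100
print(pairwise_tukeyhsd(pmi, pos))  # pairwise differences in class means
```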
# 5.3 How consistent is word class groundedness across languages?
We quantify the strength of the association between groundedness and word class on two levels: language-level MI estimates (Figure 1), and token-level PMI (Figure 3). The first level quantifies how consistent languages are in the groundedness of word classes, while the second level quantifies how much word class drives the groundedness of individual tokens. In both cases, we use ANOVA to estimate the amount of the variance in groundedness explained by word class.
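
A sketch of this variance-decomposition step using statsmodels; the toy dataframe stands in for the real per-token table of PMIs with language, dataset, and POS columns:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# toy stand-in for the per-token groundedness table
df = pd.DataFrame({
    "pmi": [0.2, 1.5, 0.1, 2.0, 0.4, 1.1, 0.0, 1.8],
    "language": ["en", "en", "de", "de", "en", "de", "en", "de"],
    "dataset": ["coco", "coco", "coco", "xm", "xm", "xm", "coco", "xm"],
    "pos": ["ADP", "NOUN", "ADP", "NOUN", "ADP", "NOUN", "ADP", "NOUN"],
})
fit = ols("pmi ~ C(language) + C(dataset) + C(pos)", data=df).fit()
table = sm.stats.anova_lm(fit, typ=2)
# Eta-squared: each factor's sum of squares as a share of the total
# (the Residual row accounts for the unexplained remainder).
print(table["sum_sq"] / table["sum_sq"].sum())
```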

Figure 4: Correlation between human concreteness ratings and type-level groundedness (PMI; left, $\rho = 0.368$ ) or uncertainty coefficient (right, $\rho = 0.609$ ): i.e., the average ratio between LM surprisal and captioning model surprisal.

MI estimates For the language-level MI estimates in Figure 1, we consider the separate effects of language, dataset, and POS on groundedness. Because the meanings (images) are matched across languages, this allows us to estimate and control for some languages having consistently larger or smaller MI estimates (due to language-specific variation in our neural estimators). We find significant effects of all 3 factors, but they differ dramatically in how much variation they explain. The effect of dataset is extremely small, explaining $0.5\%$ of the observed variance $(F_{3,816} = 5.71, p < 0.01)$ . Language identity has a larger effect, explaining $8.2\%$ of the variance $(F_{29,789} = 6.42, p < 0.001)$ . However, word class dominates, explaining most of the total variance $(57.3\%, F_{12,806} = 775, p < 0.001)$ , and $62.8\%$ of the remaining variance after controlling for variance due to dataset and language. Altogether, these factors explain $65.6\%$ of the variance, leaving the remaining variance to cross-linguistic differences in the MI of specific parts of speech.
PMI distributions We also investigate how much variation in the full distribution of contextual groundedness estimates (PMIs) is explained by word class (shown in Figure 3). Within a POS, groundedness is expected to vary substantially: for example, some (concrete, visually distinct) nouns have much higher PMI with the image than others, and tokens of the same word type also have different groundedness (e.g. "lot" referring to a location vs. "lot" as a quantity expression). Therefore, we expect word class to explain much less variance than in the overall MI estimates. Language, dataset, and their interaction account for $2.4\%$ of the total variation in PMIs across the three datasets $(F_{64,10^7} = 4727, p < 0.001)$. Word class accounts for $12.0\%$ of the total variation
$(F_{12,10^7} = 123583, p < 0.001)$ . Additionally, the interaction between word class and language (cross-linguistic variation in the means of word classes) accounts for only an additional $1.6\%$ of the total variation $(F_{330,10^7} = 602.5, p < 0.001)$ , despite having many degrees of freedom. So cross-linguistically consistent tendencies comprise the bulk of the explainable variance in the overall PMI distribution across these three datasets—5 times as much as language and dataset, and 7.5 times as much as language differences in POS groundedness.[6]
# 5.4 Semantic dimension of the measure
In this section we explore the semantic properties of the groundedness measure introduced here, comparing it to semantic norms related to contentfulness that are widely used in psycholinguistics. One potential advantage of our method is the ease with which it allows the rating of individual word tokens in context; however, existing ratings tend to be for words in isolation (word types). We focus our analysis here on English and on word types which occur at least 30 times in the COCO(-35L)$^{7}$ validation set, averaging across occurrences to obtain an estimate of the average type-level groundedness.
We compare to three different psycholinguistic norms: imageability, concreteness, and strength of visual experience. Such norms are measured by providing a definition and examples of low- and high-value words to raters, who then rate words on a Likert scale. For imageability, we use the Glasgow Psycholinguistic Norms (Scott et al., 2019). For concreteness, we use the Brysbaert et al. (2014) norms. For strength of visual experience, we use
the Lancaster Sensorimotor Norms (Lynott et al., 2020). Results for concreteness are shown in Figure 4 (left). We observe fairly weak (though significant, $p < 0.001$ ) correlations with groundedness using Spearman's $\rho$ (Imageability: $\rho = 0.288$ , Concreteness: $\rho = 0.368$ , Visual strength: $\rho = 0.212$ ).
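
The comparison itself is a rank correlation between type-level scores and the human norms; a minimal sketch, assuming two dicts keyed by word type:

```python
from scipy.stats import spearmanr

def norm_correlation(groundedness: dict, norms: dict) -> float:
    """Spearman's rho over the word types present in both tables."""
    shared = sorted(set(groundedness) & set(norms))
    rho, _ = spearmanr([groundedness[w] for w in shared],
                       [norms[w] for w in shared])
    return rho
```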
We find these weak correlations are partly due to the informativity aspect of our measure, which seems not to play as large a role in human ratings (e.g. woman is just as concrete as skateboard, but less informative and also less grounded by our measure). To account for differences in baseline (LM) word informativity, we can normalize the PMI scores by the LM surprisal, yielding the uncertainty coefficient (Theil, 1970): the proportion of the LM surprisal explained by the PMI. Regressing this value against the psycholinguistic norms, stronger correlations emerge (Imageability: $\rho = 0.548$, Concreteness: $\rho = 0.609$ as shown in Figure 4 (right), Visual strength: $\rho = 0.320$). This suggests that the differences between groundedness and surprisal are associated with concreteness. However, this measure collapses differences between word classes in overall informativity/surprisal.
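
The normalization itself is straightforward; a sketch, with `pmi` and `lm_logprob` assumed to be per-token (or type-averaged) values from the two models:

```python
def uncertainty_coefficient(pmi: float, lm_logprob: float) -> float:
    """Theil's uncertainty coefficient: the share of LM surprisal explained by m."""
    lm_surprisal = -lm_logprob  # surprisal is negative log probability
    return pmi / lm_surprisal
```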
In some cases, outliers are due to contextual effects. For example, in our data the word "polar" (high groundedness, moderate concreteness) occurs exclusively as the first word in the multiword expression "polar bear" which is highly concrete, imageable, and visual; while ratings based on the word type are for the more abstract geographical concept. Other words with divergent scores between human-based and model-based methods tend to be those which frequently occur in contexts where they are highly expected (e.g. "shore" which tends to occur in limited syntactic contexts and after the appearance of words like "boat," "lake," or "surfers"), or words which are often used nonspecifically in the image captioning context (e.g. "photo" exhibits very low PMIs, because captions frequently begin with "A photo of ...").
# 6 Discussion and Conclusion
We have proposed a grounded approach to typology, using images as a proxy for sentence meaning. Using information theory and neural models, we define groundedness, a measure of a token's association with the meaning expressed in a sentence. Our results demonstrate that word classes display consistent patterns in terms of their groundedness across a typologically diverse sample of languages.
We find these patterns can be described as a continuous cline which generalizes the traditionally dichotomous distinction between lexical and functional word classes into a gradient one. However, our results suggest grammatical word classes still carry semantic content. We find that nouns $>$ adjectives $>$ verbs, in line with a view of these classes as a continuum; yet, our results contradict claims that adpositions are more lexical than other functional classes. Our measure is related to surprisal, but diverges from it, particularly for concrete words.
While this work has focused on word classes, groundedness enables the exploration of other aspects of how languages express function through form. Future work could investigate in detail under what conditions "functional" items have higher groundedness. For example, do more spatial adpositions and determiners have higher groundedness than less spatial ones? Humans tend to have difficulty scoring highly abstract and grammaticalized words, and getting contextual scores is difficult with existing psycholinguistic approaches: groundedness opens new ways to address these questions.
Our approach is also suitable for studying non-prototypical word class organizations, such as languages which do not clearly distinguish between adjectives and verbs (Korean; Maling and Kim, 1998), or languages that split individual word classes into distinct sub-classes (Japanese adjectives; Backhouse, 1984). Future work should look at both formal and semantic sub-classes of parts of speech—such as gerunds, participles, and different semantic classes of verbs (as in VerbNet; Kipper Schuler et al., 2009)—investigating their groundedness and how it aligns with or varies from existing metrics. In particular, we conjecture that boundary classes (e.g. gerunds) may display intermediate groundedness (between nouns and verbs) compared to prototypical members of those classes. Groundedness makes it possible to test this conjecture with reference to the contexts in which words appear, which is needed for distinguishing syncretic forms.
Our approach can also cover any classes which can be defined over linguistic units, such as morphemes, phrases, or semantic classes. For instance, future work could explore the claim that inflections are more "grammatical" than derivations (Booij, 2007; Haley et al., 2024). Similarly, our measure could be used to study the lexicalization or grammaticalization of constructions (as a decrease in groundedness over time). To support such work,
we release our groundedness scores online.$^{8}$
Going beyond the details of the approach here, our work generally suggests a role for multimodal models in computational typology similar to the one played by language models in the past decade (e.g. Pimentel et al., 2023; Cotterell et al., 2018; Ackerman and Malouf, 2013). While language coverage remains more limited than text models, the latest multimodal models and datasets cover enough typologically and culturally diverse languages to make them worth studying—and we anticipate coverage will only improve. Further, the ability of multimodal models to provide an empirically grounded (if imperfect) representation of meaning makes them uniquely valuable for quantitatively addressing questions about the relation between form and function in language. Our work provides the first study of this kind, and we hope that by demonstrating the utility of this approach and releasing our groundedness scores we will inspire other researchers to follow suit.
# Limitations
Our approach has a number of important limitations. These limitations should inform the interpretation of results here, as well as any future studies considering using these techniques.
First, our operationalisation of meaning as an image is necessarily a simplification and has numerous implications for our results. Notably, the choice of images rather than videos (motivated by model quality and availability) as the representation of meaning has major implications for verbs, which tend to have meanings which are more temporally extended. This choice also has substantial implications about the variety of language which can be analyzed—many types of language use, such as metaphoric extension, are likely to be much less frequent in image captions than in other domains of language use: such phenomena are perhaps best studied using a different technique. This problem is compounded by the fact that existing multilingual corpora for these datasets remain fairly small—thus the analysis of long-tail phenomena in language using these methods is likely not yet possible.
Compared to existing methods in typology, this method trades human effort for computational resources. While we make both our models and data available, significantly lessening the burden on future studies, the models here contain between two
and three billion parameters, and the image models have very long sequence lengths due to the image tokens. Inference on new data is therefore fairly expensive with current technologies.
Further, there remain significant limitations on the languages which can be studied with these approaches. Currently available models cover just 16 languages outside of the Indo-European language family, and entire areal typological regions like the Americas are not covered. We hope that the quality and coverage of these models can continue to improve, and that findings based on current models can be revisited and replicated with newer models.
Finally, we rely on automatic part-of-speech tagging based on Universal Dependencies for the analyses here (see Appendix A for further information and Appendix B for per-language performance). Overall, the accuracy of the Stanza tagger is high for the Universal Dependencies corpora of the languages studied here (96% on average); however, it is not uniformly accurate across languages. Vietnamese has the lowest average accuracy, at 81.5% on its test set; moreover, our data differs in domain from many of the Universal Dependencies corpora, so accuracy on our captions may be somewhat lower or higher. Universal Dependencies part-of-speech tags are also not entirely uncontroversial: for instance, some linguists would argue that Korean does not have an adjective class, but UD uses one. It is possible that choices or inconsistencies in the assignment of POS tags according to UD could impact some MI estimates. In summary, noise due to POS tagging may have some influence on the results here, but is unlikely to affect our main conclusions.
# Acknowledgments
The authors would like to thank Adam Lopez, Nina Gregorio, Burin Naowarat, Yen Meng, Amr Keleg, Oli Liu, Anna Kapron-King, and Kate McCurdy for their feedback on this work. This work was performed using resources provided by Edinburgh International Data Facility (EIDF), Edinburgh Compute and Data Facility (ECDF), and the Data-Driven Innovation Programme at the University of Edinburgh. This work was in part supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences.
# References
Farrell Ackerman and Robert Malouf. 2013. Morphological organization: The low conditional entropy conjecture. Language, 89(3):429-464.

Malihe Alikhani and Matthew Stone. 2019. "Caption" as a coherence relation: Evidence and implications. In Proceedings of the Second Workshop on Shortcomings in Vision and Language, pages 58-67, Minneapolis, Minnesota. Association for Computational Linguistics.

A.E. Backhouse. 1984. Have all the adjectives gone? Lingua, 62(3):169-186.

Barend Beekhuizen, Julia Watson, and Suzanne Stevenson. 2017. Semantic typology and parallel corpora: Something about indefinite pronouns. In 39th Annual Conference of the Cognitive Science Society (CogSci), pages 112-117.

Yoav Benjamini and Daniel Yekutieli. 2001. The control of the false discovery rate in multiple testing under dependency. The Annals of Statistics, 29(4):1165-1188.

Uri Berger and Edoardo M. Ponti. 2024. Cross-lingual and cross-cultural variation in image descriptions. Preprint, arXiv:2409.16646.

Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, Thomas Unterthiner, Daniel Keysers, Skanda Koppula, Fangyu Liu, Adam Grycner, Alexey Gritsenko, Neil Houlsby, Manoj Kumar, Keran Rong, Julian Eisenschlos, Rishabh Kabra, Matthias Bauer, Matko Bosnjak, Xi Chen, Matthias Minderer, Paul Voigtlaender, Ioana Bica, Ivana Balazevic, Joan Puigcerver, Pinelopi Papalampidi, Olivier Henaff, Xi Xiong, Radu Soricut, Jeremiah Harmsen, and Xiaohua Zhai. 2024. PaliGemma: A versatile 3B VLM for transfer. Preprint, arXiv:2407.07726.

Helen Bird, David Howard, and Sue Franklin. 2003. Verbs and nouns: The importance of being imageable. Journal of Neurolinguistics, 16(2):113-149.

Walter Bisang. 2017. Grammaticalization. In Oxford Research Encyclopedia of Linguistics. Oxford University Press.

Geert Booij. 2007. Inflection. In Geert Booij, editor, The Grammar of Words: An Introduction to Linguistic Morphology, pages 99-124. Oxford University Press.

Marc Brysbaert, Amy Beth Warriner, and Victor Kuperman. 2014. Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3):904-911.

Xi Chen, Xiao Wang, Soravit Changpinyo, A. J. Piergiovanni, Piotr Padlewski, Daniel Salz, Sebastian Goodman, Adam Grycner, Basil Mustafa, Lucas Beyer, Alexander Kolesnikov, Joan Puigcerver, Nan Ding, Keran Rong, Hassan Akbari, Gaurav Mishra, Linting Xue, Ashish V. Thapliyal, James Bradbury, and Weicheng Kuo. 2023. PaLI: A jointly-scaled multilingual language-image model. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.

Christine Chiarello, Connie Shears, and Kevin Lund. 1999. Imageability and distributional typicality measures of nouns and verbs in contemporary English. Behavior Research Methods, Instruments, & Computers, 31(4):603-637.

Norbert Corver and Henk Van Riemsdijk. 2001. Semi-lexical categories. In Norbert Corver and Henk Van Riemsdijk, editors, Semi-Lexical Categories, pages 1-20. de Gruyter.

Ryan Cotterell and Jason Eisner. 2017. Probabilistic typology: Deep generative models of vowel inventories. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1182-1192, Vancouver, Canada. Association for Computational Linguistics.

Ryan Cotterell, Sabrina J. Mielke, Jason Eisner, and Brian Roark. 2018. Are all languages equally hard to language-model? In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 536-541, New Orleans, Louisiana. Association for Computational Linguistics.

William Croft. 2002. Typology and Universals, 2nd edition. Cambridge University Press.

Marie-Catherine de Marneffe, Christopher D. Manning, Joakim Nivre, and Daniel Zeman. 2021. Universal Dependencies. Computational Linguistics, 47(2):255-308.

Catherine Dubé, Laura Monetta, María Macarena Martínez-Cuitiño, and Maximiliano A. Wilson. 2014. Independent effects of imageability and grammatical class in synonym judgement in aphasia. Psicothema, 26(4):449-456.

Francis Ferraro, Nasrin Mostafazadeh, Ting-Hao Huang, Lucy Vanderwende, Jacob Devlin, Michel Galley, and Margaret Mitchell. 2015. A survey of current datasets for vision and language research. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 207-213, Lisbon, Portugal. Association for Computational Linguistics.

Simeon Floyd. 2011. Re-discovering the Quechua adjective. Linguistic Typology, 15(1):25-63.

Spandana Gella, Rico Sennrich, Frank Keller, and Mirella Lapata. 2017. Image pivoting for learning multilingual multimodal representations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2839-2845, Copenhagen, Denmark. Association for Computational Linguistics.

Gemma Team. 2024. Gemma: Open models based on Gemini research and technology. Preprint, arXiv:2403.08295.

Talmy Givon. 1984. Syntax: A Functional-Typological Introduction, Vol. I. Amsterdam: Benjamins.

Joseph Harold Greenberg, editor. 1966. Universals of Language, 2nd edition. Number 37 in The M.I.T. Press Paperback Series. M.I.T. Press, Cambridge, Mass.

John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Second Meeting of the North American Chapter of the Association for Computational Linguistics.

Coleman Haley, Edoardo M. Ponti, and Sharon Goldwater. 2024. Corpus-based measures discriminate inflection and derivation cross-linguistically. Journal of Language Modelling, 12(2):477-529.

Martin Haspelmath. 2007. Pre-established categories don't exist: Consequences for language description and typology. Linguistic Typology, 11(1):119-132.

Martin Haspelmath. 2010. Comparative concepts and descriptive categories in crosslinguistic studies. Language, 86(3):663-687.

Martin Haspelmath. 2012. How to compare major word-classes across the world's languages. UCLA Working Papers in Linguistics, 17:109-130.

Henrison Hsieh. 2019. Distinguishing nouns and verbs: A Tagalog case study. Natural Language & Linguistic Theory, 37(2):523-569.

Daniel Kaufman. 2009. Austronesian Nominalism and its consequences: A Tagalog case study. Theoretical Linguistics, 35(1):1-49.

Karin Kipper Schuler, Anna Korhonen, and Susan Brown. 2009. VerbNet overview, extensions, mappings and applications. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Tutorial Abstracts, pages 13-14, Boulder, Colorado. Association for Computational Linguistics.

Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):1126-1177.

Johan Liljencrants and Björn Lindblom. 1972. Numerical simulation of vowel quality systems: The role of perceptual contrast. Language, 48(4):839-862.

Kimberly R. Lin, Lisa Wisman Weil, Audrey Thurm, Catherine Lord, and Rhiannon J. Luyster. 2022. Word imageability is associated with expressive vocabulary in children with autism spectrum disorder. Autism & Developmental Language Impairments, 7.

Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, and Desmond Elliott. 2021. Visually grounded reasoning across languages and cultures. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10467-10485, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Dermot Lynott, Louise Connell, Marc Brysbaert, James Brand, and James Carney. 2020. The Lancaster Sensorimotor Norms: Multidimensional measures of perceptual and action strength for 40,000 English words. Behavior Research Methods, 52(3):1271-1291.

Joan Maling and So-Won Kim. 1998. Case assignment in the sipta-construction. In Ross King, editor, Description and Explanation in Korean Linguistics. East Asia Program, Cornell University, Ithaca, NY.

Alireza Mohammadshahi, Rémi Lebret, and Karl Aberer. 2019. Aligning multilingual word embeddings for cross-modal retrieval task. In Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN), pages 11-17, Hong Kong, China. Association for Computational Linguistics.

Byung-Doh Oh and William Schuler. 2024. Leading whitespaces of language models' subword vocabulary poses a confound for calculating word probabilities. Preprint, arXiv:2406.10851.

Hyunji Hayley Park, Katherine J. Zhang, Coleman Haley, Kenneth Steimel, Han Liu, and Lane Schwartz. 2021. Morphology matters: A multilingual language modeling analysis. Transactions of the Association for Computational Linguistics, 9:261-276.

Andrew K. Pawley. 2006. Where have all the verbs gone? Remarks on the organisation of languages with small, closed verb classes. In 11th Biennial Rice University Linguistics Symposium.

Tiago Pimentel and Clara Meister. 2024. How to compute the probability of a word. Preprint, arXiv:2406.14561.

Tiago Pimentel, Clara Meister, Ethan Wilcox, Kyle Mahowald, and Ryan Cotterell. 2023. Revisiting the optimality of word lengths. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2240-2255, Singapore. Association for Computational Linguistics.

Frans Plank. 1994. Inflection and derivation. In The Encyclopedia of Language and Linguistics, pages 1671-1679. Elsevier Science and Technology, Amsterdam.

Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101-108, Online. Association for Computational Linguistics.

Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, and Balaraman Ravindran. 2016. Bridge correlational neural networks for multilingual multimodal representation learning. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 171-181, San Diego, California. Association for Computational Linguistics.

Alexander Rauhut. 2023. Quantitative Aspects of the Word Class Continuum in English. Ph.D. thesis, Freie Universität Berlin.

Norvin Richards. 2009. Nouns, verbs, and hidden structure in Tagalog. Theoretical Linguistics, 35(1):139-152.

Eva Schultze-Berndt. 2000. Simple and Complex Verbs in Jaminjung: A Study of Event Categorisation in an Australian Language. Ph.D. thesis, Radboud University, Nijmegen.

Graham G. Scott, Anne Keitel, Marc Becirspahic, Bo Yao, and Sara C. Sereno. 2019. The Glasgow Norms: Ratings of 5,500 words on nine scales. Behavior Research Methods, 51(3):1258-1270.

Nathaniel J. Smith and Roger Levy. 2013. The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302-319.

Adrian Staub. Forthcoming. Predictability in language comprehension: Prospects and problems for surprisal. Annual Review of Linguistics.

Ashish V. Thapliyal, Jordi Pont Tuset, Xi Chen, and Radu Soricut. 2022. Crossmodal-3600: A massively multilingual multimodal evaluation dataset. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 715-729, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

Henri Theil. 1970. On the estimation of relationships involving qualitative variables. American Journal of Sociology, 76(1):103-154.

David John Weber. 1983. A Grammar of Huallaga (Huanuco) Quechua. Ph.D. thesis, University of California, Los Angeles.

Ethan G. Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, and Roger P. Levy. 2023. Testing the predictions of surprisal theory in 11 languages. Transactions of the Association for Computational Linguistics, 11:1451-1470.

Tianxing Wu, Chaoyu Gao, Lin Li, and Yuxiang Wang. 2022. Leveraging multi-modal information for cross-lingual entity matching across knowledge graphs. Applied Sciences, 12(19).

Andre Ye, Sebastin Santy, Jena D. Hwang, Amy X. Zhang, and Ranjay Krishna. 2024. Computer vision datasets and models exhibit cultural and linguistic diversity in perception. Preprint, arXiv:2310.14356.

X. Zhai, B. Mustafa, A. Kolesnikov, and L. Beyer. 2023. Sigmoid loss for language image pre-training. In 2023 IEEE/CVF International Conference on Computer Vision (ICCV), pages 11941-11952, Los Alamitos, CA, USA. IEEE Computer Society.
# A Implementation details
# A.1 Part of Speech annotations
Note that none of the datasets used here come annotated with word class information. We adopt the Universal Dependencies tagset, using Stanza (Qi et al., 2020, v.1.8.2) to tag words with their Universal Dependencies parts of speech. We remove from our analysis single orthographic words to which Stanza assigns multiple parts of speech, like English "don't" or German "zum", since it is unclear to which part of speech they should be assigned. Stanza does not cover Thai, Maori, Tagalog, Swahili, or Bengali for part of speech tagging, so they are excluded.
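
The tagging step uses the standard Stanza API; a sketch for English (the relevant models must first be fetched with stanza.download), including the filtering of multi-part-of-speech words described above:

```python
import stanza

nlp = stanza.Pipeline(lang="en", processors="tokenize,pos",
                      tokenize_pretokenized=True)

def tag_caption(tokens: list[str]) -> list[tuple[str, str]]:
    doc = nlp([tokens])  # one pre-tokenized sentence
    tagged = []
    for sentence in doc.sentences:
        for token in sentence.tokens:
            if len(token.words) > 1:
                continue  # drop words like "don't"/"zum" that receive several tags
            tagged.append((token.text, token.words[0].upos))
    return tagged
```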
# A.2 Word-level PMI Estimates
Because the tokenizer of the present model does not cross orthographic word boundaries, we can sum the log probabilities of a word's constituent subword tokens to obtain word-level rather than token-level log probability estimates. Some languages, such as Japanese, do not ordinarily indicate word boundaries in their orthography; however, the pretraining data and evaluation datasets (Crossmodal-3600 and COCO-35L) are word-tokenized, so this information is readily available. Further, because our language model uses sub-word tokenization with leading whitespaces, we adopt the correction proposed by Oh and Schuler (2024); Pimentel and Meister (2024). Specifically, let $\mathbf{s}_{w_t}$ be the decomposition of word $w_t$ into a sequence of subwords, and $\mathbf{s}_{\mathbf{w}_{<t}}$ be the decomposition of context $\mathbf{w}_{<t}$ into a sequence of subwords. Given $\mathcal{S}_{\mathrm{bow}}$, the subset of the tokenizer vocabulary that contains subwords that are beginning-of-word (e.g., with a leading whitespace):
$$
p(w_t \mid \mathbf{w}_{<t}) = p(\mathbf{s}_{w_t} \mid \mathbf{s}_{\mathbf{w}_{<t}}) \cdot \frac{\sum_{s \in \mathcal{S}_{\mathrm{bow}}} p(s \mid \mathbf{s}_{\mathbf{w}_{<t}} \odot \mathbf{s}_{w_t})}{\sum_{s \in \mathcal{S}_{\mathrm{bow}}} p(s \mid \mathbf{s}_{\mathbf{w}_{<t}})} \tag{5}
$$
where $\odot$ stands for concatenation.
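
A sketch of this correction for a Hugging Face-style causal LM, where `context_ids` and `word_ids` are the subword decompositions of the context and the word, and `bow_ids` indexes the beginning-of-word subwords $\mathcal{S}_{\mathrm{bow}}$ (names are illustrative):

```python
import torch

def word_logprob(model, context_ids, word_ids, bow_ids):
    # assumes a non-empty context, e.g. at least a BOS token
    ids = torch.cat([context_ids, word_ids])
    logprobs = model(ids.unsqueeze(0)).logits.log_softmax(-1)[0]
    n_ctx = context_ids.size(0)
    # log p(s_{w_t} | s_{w_<t}): sum the word's own subword log probabilities.
    lp = sum(logprobs[n_ctx + i - 1, tok] for i, tok in enumerate(word_ids))
    # Equation 5: renormalize by the beginning-of-word mass after the word
    # versus after the context alone.
    lp = lp + torch.logsumexp(logprobs[-1, bow_ids], dim=-1)
    lp = lp - torch.logsumexp(logprobs[n_ctx - 1, bow_ids], dim=-1)
    return lp
```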
# A.3 Training details
For training our language model, we did a grid search over learning rates and whether or not to use weight decay. We use a learning rate of $2 \times 10^{-5}$ and weight decay of $1 \times 10^{-6}$ with the Adam optimizer. To train the final model, we train on a single A100 with a batch size of 4 for 430,000 steps on COCO-35L ( $\approx$ 50 hours of training, approximately 3 epochs). Our model achieves much lower perplexity on our evaluation datasets than Gemma-2B, suggesting successful domain adaptation.
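
The corresponding optimizer configuration is a one-liner; the sketch below uses a stand-in module in place of the PaliGemma-initialized LM:

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the PaliGemma-initialized text-only LM
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5, weight_decay=1e-6)
```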
# B Model performance by language
See Table 2 for per-language captioning performance, POS tagging accuracy, and perplexity of the base Gemma-2B model, the PaliGemma captioning model, and our fine-tuned LM.
# C Correlation plots for other psycholinguistic norms
Figure 5 shows the relationship between our measure and concreteness, as well as the uncertainty coefficient, which normalizes our measure by the language model surprisal. While concreteness is most strongly associated with our measure/its normalized variant, for completeness we show the relationships between our measure and the other psycholinguistic norms (imageability and strength of visual experience) we investigate here.


(a) $\rho = 0.288$


(b) $\rho = 0.548$


(c) $\rho = 0.212$


(d) $\rho = 0.320$

Figure 5: Correlation between English psycholinguistic norms and type-level groundedness (left) or uncertainty coefficient (right): i.e., the average ratio between LM surprisal and captioning model surprisal. Type-level measures were computed by averaging scores across the COCO-dev dataset for types which occur at least 30 times.
<table><tr><td rowspan="2">Language</td><td rowspan="2">ISO 639-1</td><td colspan="2">CIDEr</td><td colspan="3">Perplexity (COCO-35L)</td><td colspan="3">Perplexity (XM3600)</td><td colspan="3">Perplexity (Multi30K)</td><td rowspan="2">Tagging Acc.</td></tr>
<tr><td>COCO-35L</td><td>XM3600</td><td>Gemma</td><td>PG</td><td>FT-LM</td><td>Gemma</td><td>PG</td><td>FT-LM</td><td>Gemma</td><td>PG</td><td>FT-LM</td></tr>
<tr><td>Arabic</td><td>ar</td><td>93.73</td><td>33.20</td><td>4.86</td><td>1.48</td><td>3.09</td><td>5.12</td><td>2.87</td><td>4.63</td><td>4.51</td><td>1.94</td><td>3.46</td><td>95.18</td></tr>
<tr><td>Bengali</td><td>bn</td><td>91.23</td><td>24.07</td><td>2.85</td><td>0.88</td><td>1.61</td><td>2.65</td><td>1.56</td><td>2.16</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>Czech</td><td>cs</td><td>85.57</td><td>30.12</td><td>5.07</td><td>1.40</td><td>3.04</td><td>4.94</td><td>2.45</td><td>4.38</td><td>4.61</td><td>2.24</td><td>4.04</td><td>98.31</td></tr>
<tr><td>Danish</td><td>da</td><td>117.94</td><td>47.57</td><td>5.79</td><td>1.46</td><td>3.02</td><td>5.74</td><td>2.96</td><td>5.06</td><td>-</td><td>-</td><td>-</td><td>98.30</td></tr>
<tr><td>German</td><td>de</td><td>93.78</td><td>33.13</td><td>5.23</td><td>1.59</td><td>3.47</td><td>5.50</td><td>3.14</td><td>5.55</td><td>4.73</td><td>2.16</td><td>4.22</td><td>96.96</td></tr>
<tr><td>Greek</td><td>el</td><td>119.99</td><td>21.90</td><td>3.54</td><td>2.13</td><td>3.55</td><td>3.32</td><td>0.90</td><td>1.75</td><td>-</td><td>-</td><td>-</td><td>97.12</td></tr>
<tr><td>English</td><td>en</td><td>138.15</td><td>68.30</td><td>4.74</td><td>1.73</td><td>3.62</td><td>4.88</td><td>3.51</td><td>5.72</td><td>4.13</td><td>3.02</td><td>4.79</td><td>97.56</td></tr>
<tr><td>Spanish</td><td>es</td><td>138.51</td><td>48.69</td><td>4.85</td><td>1.55</td><td>3.36</td><td>5.40</td><td>3.23</td><td>5.51</td><td>-</td><td>-</td><td>-</td><td>98.01</td></tr>
<tr><td>Persian</td><td>fa</td><td>122.99</td><td>45.62</td><td>4.86</td><td>1.45</td><td>2.88</td><td>4.96</td><td>2.84</td><td>4.47</td><td>-</td><td>-</td><td>-</td><td>97.43</td></tr>
<tr><td>Finnish</td><td>fi</td><td>35.76</td><td>10.86</td><td>5.31</td><td>1.39</td><td>2.91</td><td>4.95</td><td>2.70</td><td>4.49</td><td>-</td><td>-</td><td>-</td><td>97.20</td></tr>
<tr><td>French</td><td>fr</td><td>137.79</td><td>53.35</td><td>4.96</td><td>1.44</td><td>3.15</td><td>5.13</td><td>3.12</td><td>5.08</td><td>4.36</td><td>2.73</td><td>4.50</td><td>97.55</td></tr>
<tr><td>Hebrew</td><td>he</td><td>97.94</td><td>36.59</td><td>4.36</td><td>1.34</td><td>2.71</td><td>3.84</td><td>2.30</td><td>3.74</td><td>-</td><td>-</td><td>-</td><td>90.84</td></tr>
<tr><td>Hindi</td><td>hi</td><td>104.52</td><td>26.98</td><td>3.75</td><td>1.19</td><td>2.28</td><td>3.86</td><td>2.68</td><td>3.54</td><td>-</td><td>-</td><td>-</td><td>97.95</td></tr>
<tr><td>Croatian</td><td>hr</td><td>89.42</td><td>25.95</td><td>5.24</td><td>1.37</td><td>2.88</td><td>4.68</td><td>2.49</td><td>4.33</td><td>-</td><td>-</td><td>-</td><td>98.21</td></tr>
<tr><td>Hungarian</td><td>hu</td><td>78.90</td><td>21.96</td><td>4.94</td><td>1.46</td><td>3.05</td><td>4.88</td><td>2.84</td><td>4.88</td><td>-</td><td>-</td><td>-</td><td>95.80</td></tr>
<tr><td>Indonesian</td><td>id</td><td>146.38</td><td>37.46</td><td>6.01</td><td>1.63</td><td>3.51</td><td>4.98</td><td>3.16</td><td>5.18</td><td>-</td><td>-</td><td>-</td><td>95.03</td></tr>
<tr><td>Italian</td><td>it</td><td>131.15</td><td>37.98</td><td>5.21</td><td>1.50</td><td>3.34</td><td>5.44</td><td>3.36</td><td>5.43</td><td>-</td><td>-</td><td>-</td><td>96.98</td></tr>
<tr><td>Japanese</td><td>ja</td><td>125.07</td><td>35.90</td><td>5.95</td><td>1.34</td><td>2.81</td><td>6.07</td><td>2.60</td><td>4.60</td><td>-</td><td>-</td><td>-</td><td>95.74</td></tr>
<tr><td>Korean</td><td>ko</td><td>112.40</td><td>42.82</td><td>4.89</td><td>1.29</td><td>2.61</td><td>4.80</td><td>2.37</td><td>3.95</td><td>-</td><td>-</td><td>-</td><td>95.86</td></tr>
<tr><td>Norwegian</td><td>no</td><td>118.02</td><td>39.67</td><td>6.13</td><td>1.50</td><td>3.07</td><td>5.70</td><td>2.90</td><td>4.75</td><td>-</td><td>-</td><td>-</td><td>98.38</td></tr>
<tr><td>Dutch</td><td>nl</td><td>114.76</td><td>47.19</td><td>4.96</td><td>1.54</td><td>3.24</td><td>5.34</td><td>3.15</td><td>5.55</td><td>-</td><td>-</td><td>-</td><td>96.71</td></tr>
<tr><td>Polish</td><td>pl</td><td>86.99</td><td>29.50</td><td>5.10</td><td>1.41</td><td>3.06</td><td>4.70</td><td>2.45</td><td>4.66</td><td>-</td><td>-</td><td>-</td><td>98.80</td></tr>
<tr><td>Portuguese</td><td>pt</td><td>136.40</td><td>42.76</td><td>5.52</td><td>1.53</td><td>3.30</td><td>5.56</td><td>3.38</td><td>5.49</td><td>-</td><td>-</td><td>-</td><td>97.74</td></tr>
<tr><td>Romanian</td><td>ro</td><td>118.57</td><td>22.36</td><td>5.15</td><td>1.30</td><td>2.73</td><td>4.62</td><td>2.63</td><td>4.18</td><td>-</td><td>-</td><td>-</td><td>97.98</td></tr>
<tr><td>Russian</td><td>ru</td><td>98.45</td><td>28.23</td><td>4.67</td><td>1.39</td><td>3.21</td><td>4.21</td><td>2.50</td><td>5.12</td><td>-</td><td>-</td><td>-</td><td>97.34</td></tr>
<tr><td>Swedish</td><td>sv</td><td>120.08</td><td>45.93</td><td>5.77</td><td>1.51</td><td>3.11</td><td>6.03</td><td>2.99</td><td>5.37</td><td>-</td><td>-</td><td>-</td><td>97.81</td></tr>
<tr><td>Swahili</td><td>sw</td><td>111.15</td><td>29.45</td><td>5.59</td><td>1.28</td><td>2.57</td><td>5.17</td><td>2.96</td><td>4.10</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>Maori</td><td>mi</td><td>156.26</td><td>40.81</td><td>5.59</td><td>1.07</td><td>2.14</td><td>5.78</td><td>3.12</td><td>3.96</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>Telugu</td><td>te</td><td>76.35</td><td>25.80</td><td>2.93</td><td>0.79</td><td>1.48</td><td>2.98</td><td>1.60</td><td>2.32</td><td>-</td><td>-</td><td>-</td><td>93.97</td></tr>
<tr><td>Thai</td><td>th</td><td>146.17</td><td>67.49</td><td>4.80</td><td>1.08</td><td>2.00</td><td>4.60</td><td>1.70</td><td>2.90</td><td>-</td><td>-</td><td>-</td><td>-</td></tr>
<tr><td>Turkish</td><td>tr</td><td>86.26</td><td>27.58</td><td>6.05</td><td>1.62</td><td>3.42</td><td>5.61</td><td>3.00</td><td>5.00</td><td>-</td><td>-</td><td>-</td><td>95.26</td></tr>
<tr><td>Ukrainian</td><td>uk</td><td>92.90</td><td>22.47</td><td>4.26</td><td>1.23</td><td>2.67</td><td>4.01</td><td>2.48</td><td>4.38</td><td>-</td><td>-</td><td>-</td><td>97.52</td></tr>
<tr><td>Vietnamese</td><td>vi</td><td>159.82</td><td>51.57</td><td>4.83</td><td>1.48</td><td>3.02</td><td>4.66</td><td>3.02</td><td>4.86</td><td>-</td><td>-</td><td>-</td><td>81.48</td></tr>
<tr><td>Chinese</td><td>zh</td><td>103.19</td><td>26.41</td><td>6.01</td><td>1.55</td><td>3.21</td><td>5.86</td><td>3.06</td><td>4.97</td><td>-</td><td>-</td><td>-</td><td>88.82</td></tr></table>
Table 2: Per-language performance metrics for the models used. A) CIDEr scores on Crossmodal-3600 (XM3600) and COCO-35L for the paligemma-3b-ft-coco35-224 model. B) Perplexity scores for the base Gemma-2B model (Gemma), PaliGemma (PG), and our fine-tuned PaliGemma-based LM. As expected, PaliGemma has the lowest perplexity, and our fine-tuned model particularly improves perplexity on COCO-35L and for languages with different orthographies. C) Average POS tagging accuracy for the Stanza models on the Universal Dependencies treebank test sets for each language.
# D Groundedness distribution for Crossmodal-3600
Results are ordered by descending mutual information estimate within the dataset (average groundedness/PMI). Hue indicates the average crosslinguistic ranking of a part of speech.

|
| 330 |
+
|
| 331 |
+

|
| 332 |
+
|
| 333 |
+

|
| 334 |
+
|
| 335 |
+

|
| 336 |
+
|
| 337 |
+

|
| 338 |
+
|
| 339 |
+

|
| 340 |
+
|
| 341 |
+

|
| 342 |
+
|
| 343 |
+

|
| 344 |
+
|
| 345 |
+

|
| 346 |
+
|
| 347 |
+

|
| 348 |
+
|
| 349 |
+

|
| 350 |
+
|
| 351 |
+

|
| 352 |
+
|
| 353 |
+

|
| 354 |
+
|
| 355 |
+

|
| 356 |
+
|
| 357 |
+

|
| 358 |
+
|
| 359 |
+

|
| 360 |
+
Italian (it)
|
| 361 |
+
|
| 362 |
+

|
| 363 |
+
Japanese (ja)
|
| 364 |
+
|
| 365 |
+

|
| 366 |
+
Korean (ko)
|
| 367 |
+
|
| 368 |
+

|
| 369 |
+
Dutch (nl)
|
| 370 |
+
|
| 371 |
+

|
| 372 |
+
Norwegian (no)
|
| 373 |
+
|
| 374 |
+

|
| 375 |
+
Polish (pl)
|
| 376 |
+
|
| 377 |
+

|
| 378 |
+
Portuguese (pt)
|
| 379 |
+
|
| 380 |
+

|
| 381 |
+
Romanian (ro)
|
| 382 |
+
|
| 383 |
+

|
| 384 |
+
Russian (ru)
|
| 385 |
+
|
| 386 |
+

|
| 387 |
+
Swedish (sv)
|
| 388 |
+
|
| 389 |
+

|
| 390 |
+
Telugu (te)
|
| 391 |
+
Turkish (tr)
|
| 392 |
+
|
| 393 |
+

|
| 394 |
+
|
| 395 |
+

|
| 396 |
+
Ukrainian (uk)
|
| 397 |
+
Vietnamese (vi)
|
| 398 |
+
|
| 399 |
+

|
| 400 |
+
|
| 401 |
+

|
| 402 |
+
Chinese (zh)
|
| 403 |
+
|
| 404 |
+
# E Groundedness distribution for Multi30K
|
| 405 |
+
|
| 406 |
+
Results are ordered by descending mutual information estimate within the dataset (average groundedness/PMI). Hue indicates the average cross-linguistic ranking of a part of speech.
|
| 407 |
+
|
| 408 |
+

|
| 409 |
+
Arabic (ar)
|
| 410 |
+
|
| 411 |
+

|
| 412 |
+
Czech (cs)
|
| 413 |
+
|
| 414 |
+

|
| 415 |
+
German (de)
|
| 416 |
+
|
| 417 |
+

|
| 418 |
+
English (en)
|
| 419 |
+
|
| 420 |
+

|
| 421 |
+
French (fr)
|
| 422 |
+
# F Groundedness distribution for COCO-35L Development Set
|
| 423 |
+
|
| 424 |
+
Results are ordered by descending mutual information estimate within the dataset (average groundedness/PMI). Hue indicates the average cross-linguistic ranking of a part of speech.
|
| 425 |
+
|
| 426 |
+

|
| 427 |
+
Arabic (ar)
|
| 428 |
+
|
| 429 |
+

|
| 430 |
+
Czech (cs)
|
| 431 |
+
|
| 432 |
+

|
| 433 |
+
Danish (da)
|
| 434 |
+
|
| 435 |
+

|
| 436 |
+
German (de)
|
| 437 |
+
|
| 438 |
+

|
| 439 |
+
Modern Greek (el)
|
| 440 |
+
|
| 441 |
+

|
| 442 |
+
English (en)
|
| 443 |
+
|
| 444 |
+

|
| 445 |
+
Spanish (es)
|
| 446 |
+
|
| 447 |
+

|
| 448 |
+
Persian (fa)
|
| 449 |
+
|
| 450 |
+

|
| 451 |
+
Finnish (fi)
|
| 452 |
+
|
| 453 |
+

|
| 454 |
+
French (fr)
|
| 455 |
+
|
| 456 |
+

|
| 457 |
+
Hebrew (he)
|
| 458 |
+
|
| 459 |
+

|
| 460 |
+
Hindi (hi)
|
| 461 |
+
|
| 462 |
+

|
| 463 |
+
Croatian (hr)
|
| 464 |
+
|
| 465 |
+

|
| 466 |
+
Hungarian (hu)
|
| 467 |
+
|
| 468 |
+

|
| 469 |
+
Indonesian (id)
|
| 470 |
+
|
| 471 |
+

|
| 472 |
+
Italian (it)
|
| 473 |
+
|
| 474 |
+

|
| 475 |
+
Japanese (ja)
|
| 476 |
+
|
| 477 |
+

|
| 478 |
+
Korean (ko)
|
| 479 |
+
|
| 480 |
+

|
| 481 |
+
Dutch (nl)
|
| 482 |
+
|
| 483 |
+

|
| 484 |
+
Norwegian (no)
|
| 485 |
+
|
| 486 |
+

|
| 487 |
+
Polish (pl)
|
| 488 |
+
|
| 489 |
+

|
| 490 |
+
Portuguese (pt)
|
| 491 |
+
|
| 492 |
+

|
| 493 |
+
Romanian (ro)
|
| 494 |
+
|
| 495 |
+

|
| 496 |
+
Russian (ru)
|
| 497 |
+
|
| 498 |
+

|
| 499 |
+
Swedish (sv)
|
| 500 |
+
|
| 501 |
+

|
| 502 |
+
Telugu (te)
|
| 503 |
+
|
| 504 |
+

|
| 505 |
+
Turkish (tr)
|
| 506 |
+
|
| 507 |
+

|
| 508 |
+
Ukrainian (uk)
|
| 509 |
+
|
| 510 |
+

|
| 511 |
+
Vietnamese (vi)
|
| 512 |
+
|
| 513 |
+

|
| 514 |
+
Chinese (zh)
|
2025/A Grounded Typology of Word Classes/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:3e8dfffe1e336e57e716041cc7553f8bacaae62f278445e19b340ac277dfc1a0
|
| 3 |
+
size 1567139
|
2025/A Grounded Typology of Word Classes/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/A Logical Fallacy-Informed Framework for Argument Generation/ef68dfd3-62d8-4d31-a579-f4bb4e064dce_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/A Logical Fallacy-Informed Framework for Argument Generation/ef68dfd3-62d8-4d31-a579-f4bb4e064dce_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/A Logical Fallacy-Informed Framework for Argument Generation/ef68dfd3-62d8-4d31-a579-f4bb4e064dce_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:079d66ba936c63efa1e17e36aa885f749caec68cf8137b9a44eeed0a91e4f50e
|
| 3 |
+
size 1791314
|
2025/A Logical Fallacy-Informed Framework for Argument Generation/full.md
ADDED
|
@@ -0,0 +1,505 @@
| 1 |
+
# A Logical Fallacy-Informed Framework for Argument Generation
|
| 2 |
+
|
| 3 |
+
Luca Mouchel, Debjit Paul, Shaobo Cui, Robert West, Antoine Bosselut, Boi Faltings EPFL, Switzerland {firstname.lastname}@epfl.ch
|
| 4 |
+
|
| 5 |
+
# Abstract
|
| 6 |
+
|
| 7 |
+
Despite the remarkable performance of large language models (LLMs), they still struggle with generating logically sound arguments, resulting in potential risks such as spreading misinformation. An important factor contributing to LLMs' suboptimal performance in generating coherent arguments is their oversight of logical fallacies. To address this issue, we introduce fallacy-informed preference optimization (FIPO) that helps steer LLMs toward generating logically sound arguments. FIPO includes a classification loss to capture the fine-grained information on fallacy types. Our results on argument generation tasks show that FIPO reduces fallacy errors by up to $17.5\%$. Furthermore, our human evaluation results reveal that the arguments generated by our method are of significantly higher quality than those of the fine-tuned baselines and other preference optimization methods, such as DPO. These findings highlight the importance of ensuring models are aware of logical fallacies for effective argument generation.$^{1}$
|
| 8 |
+
|
| 9 |
+
# 1 Introduction
|
| 10 |
+
|
| 11 |
+
Argument generation is crucial in daily life and has numerous online and offline applications. For instance, legislative bodies often use persuasive arguments to secure the necessary votes for bills to pass. However, generating logically coherent arguments is a challenging task and requires an appropriate combination of reliable evidence and effective logical reasoning (Walton et al., 2008; Wachsmuth et al., 2017). Humans are prone to misconstruing logical argumentation in the real world and often unknowingly adopt flawed reasoning in discussions (Evans, 2002). Similarly, large language models (LLMs) have demonstrated limitations in their logical reasoning capabilities, suffering from logical inconsistencies (Chen et al., 2023b; Jin et al., 2022; Sourati et al., 2023), and producing logically incorrect arguments (Chen et al., 2023a).



Figure 1: Examples of fallacious and logically sound arguments.
|
| 17 |
+
|
| 18 |
+
In this work, we hypothesize that LLMs generate logically incorrect arguments because they lack an understanding of logical fallacies. A logical fallacy is an error in reasoning that undermines the validity of an argument (Tindale, 2007). For example, "I've never had the flu because I take my vitamins every day." is an instance of a false causality fallacy. These fallacies arise from unsound premises. They can be identified by the absence of legitimate and relevant evidence to support their claims.
|
| 19 |
+
|
| 20 |
+
In a preliminary study, we evaluate 100 arguments on different topics with ChatGPT and find that $21\%$ of the arguments contain fallacies. We observe that several types of logical fallacy arguments, especially false causality and faulty generalization, are commonly generated by different LLMs such as Llama-2 (Touvron et al., 2023) or Mistral (Jiang et al., 2023). Our study explores the relationship between logical fallacy understanding and argument generation. We view models generating logically coherent arguments as a logical alignment problem, i.e., aligning the model responses (arguments) to the given topic and stance. Recent methods, such as Reinforcement Learning with Human or AI Feedback, have been shown to bridge the "alignment gap" between model responses and human preferences (Christiano et al., 2017; Ziegler et al., 2019; Lee et al., 2023). However, LLMs fine-tuned with RLHF can still generate logically fallacious arguments. Hence, to train models to prefer logically correct arguments, it is important to have reliable and diverse error scenarios as training examples. To address this issue, we define 13 categories of logical fallacy errors, drawing inspiration from the history of logic and logical fallacies studied since the times of Ancient Greece by Aristotle (Aristotle, 2006). Figure 1 depicts some fallacy examples, and Figure 3 shows different fallacy categories.
|
| 23 |
+
|
| 24 |
+
We use ChatGPT to collect 7,872 fallacy arguments spanning different fallacy categories to train preference models. First, we perform supervised fine-tuning (SFT) to teach models to generate arguments. Next, we use preference optimization methods to instil the ability to generate logically correct arguments. Specifically, we use Direct Preference Optimization (DPO) (Rafailov et al., 2023), Proximal Policy Optimization (PPO) (Schulman et al., 2017), Kahneman-Tversky Optimization (KTO) (Ethayarajh et al., 2024), and Contrastive Preference Optimization (CPO) (Xu et al., 2024) on our preference dataset. These fallacy-informed models demonstrate a notable improvement in argument quality, achieving a higher win-rate (i.e., the proportion of wins over the SFT baseline in terms of argument quality) and reducing the fallacy-rate (i.e., the proportion of fallacies generated) by up to $8.5\%$ . However, we observe that the above methods fail to account for the nuanced differences between logical fallacies, instead treating each fallacy similarly.
|
| 25 |
+
|
| 26 |
+
To this end, we introduce Fallacy-Informed Preference Optimization (FIPO) that combines the original preference optimization loss with a weighted cross-entropy classification loss. This additional loss penalizes the model based on the frequency of different fallacies in the preference dataset, applying stronger penalties for misclassifying more frequently occurring fallacies during training and reinforcing fallacy-aware learning. We observe that FIPO outperforms the SFT baselines by reducing the fallacy-rate from $34.5\%$ to $17\%$ for Llama-2 (7B) and from $32.5\%$ to $19.5\%$ for Mistral (7B). FIPO also outperforms the best preference optimization method (PPO-Llama 2 and KTO-Mistral) by producing $9\%$ and $8.25\%$ fewer logical fallacy errors, respectively. Our analysis explores how preference optimization enhances argument quality and compares FIPO to other methods. We also investigate whether preference optimization reduces logical fallacy errors and how FIPO improves upon existing techniques. Additionally, we examine the most common fallacy types observed in arguments generated during inference.
|
| 29 |
+
|
| 30 |
+
Contributions: (i) To the best of our knowledge, we are the first to study and show how understanding logical fallacies can improve argument generation quality; (ii) We introduce FIPO, which integrates a classification loss during the preference optimization phase, which helps further reduce the fallacy-rate; (iii) Human evaluation results validate GPT-4's reliability in identifying fallacies and FIPO's higher-quality arguments compared to other preference learning methods.
|
| 31 |
+
|
| 32 |
+
# 2 Related Work
|
| 33 |
+
|
| 34 |
+
Logical Fallacies. Logical fallacies are errors in reasoning that can undermine the validity of an argument (Tindale, 2007). In argumentative discourse, identifying fallacies is crucial for measuring the quality of argumentation (Wachsmuth et al., 2017; Nakpih and Santini, 2020). Prior works have shown that LLMs struggle to classify logical fallacies, with F1 scores reaching $66\%$ (Jin et al., 2022; Sourati et al., 2023; Ruiz-Dolz and Lawrence, 2023). More recently, Li et al. (2024) demonstrated GPT-4's ability to identify and classify fallacies, achieving over $86\%$ accuracy in both tasks. Nevertheless, previous works have not explored how a nuanced understanding of logical fallacies might influence argument generation.
|
| 35 |
+
|
| 36 |
+
Argument Generation. Argument generation is an important task in natural language processing that involves generating coherent and persuasive arguments for a given topic. Existing argument generation frameworks have made significant strides: Hua and Wang (2018) introduced a generator that creates arguments from key phrases, followed by a separate decoder to produce the final argument text. Hua et al. (2019) developed Candela, a style-controlling counter-argument generation framework. Schiller et al. (2021) presented Arg-CTRL, a model that uses control codes for topic, stance, and aspect in sentence-level argument generation. More recently, Saha and Srihari (2023) introduced an argument generator for factual arguments across a limited set of topics. Despite these advances, no study has yet addressed generating arguments through the lens of logical fallacies.
|
| 37 |
+
|
| 38 |
+

|
| 39 |
+
Figure 2: Overview of our framework. The first step is supervised fine-tuning using argumentation data. Next, we collect preference data by generating fallacious arguments using ChatGPT. We then perform preference optimization using methods like DPO, PPO, CPO, and KTO. Finally, we introduce FIPO, which integrates a classification loss during the preference optimization phase.
|
| 40 |
+
|
| 41 |
+
Data Generation and Automatic Evaluation with LLMs. Using LLMs in data generation is supported by their proven effectiveness in a spectrum of text generation tasks, including the creation of instructional and relational datasets (Peng et al., 2023; Sun et al., 2023; Wang et al., 2023; Shao et al., 2023). Notably, Schick and Schütze (2021) demonstrated the utility of LLMs in producing datasets that significantly enhance the training of smaller models. Regarding the evaluation of automatically generated text, Liu et al. (2023) highlight that traditional metrics such as BLEU (Papineni et al., 2002) and ROUGE (Lin, 2004) are inadequate for tasks requiring creativity and diversity. Given Li et al. (2024)'s demonstration of GPT-4's ability to identify and classify logical fallacies, we use it as a judge to identify fallacies in arguments, validated by a human annotation task we performed to verify GPT-4's reliability (Section 5.2).
|
| 42 |
+
|
| 43 |
+
# 3 Task Formulation
|
| 44 |
+
|
| 45 |
+
In this work, we address the argument generation task. In order to assess LLMs' capabilities for argument generation, we leverage the EXPLAGRAPHS dataset (Saha et al., 2021), consisting of topics, stances and arguments, denoted as $\mathcal{D} = \{t^{(i)}, s^{(i)}, y_w^{(i)}\}_{i=1}^N$ , where $t$ is the topic, $s$ the stance (supporting or counter), and $y_w$ the argument. One naive approach to address the problem of logical argument generation is prompting LLMs. To assess this approach, we evaluate ChatGPT (gpt-3.5-turbo), Llama-2 (7B), and Mistral (7B) in the zero-shot setting on a set of 100 topics. Additionally, we implement a Retrieval Augmented Generation (RAG) model with Llama-2 using the wiki-dpr database (Karpukhin et al., 2020). Examples of contexts retrieved for RAG are provided in Appendix C.2. This baseline evaluation is conducted in two separate scenarios: in $\mathbf{S}_1$, we prompt models to generate arguments given a topic and a stance. In $\mathbf{S}_2$, we guide the model towards generating logical arguments by defining logical fallacies, giving two examples, and instructing them not to generate a fallacious argument. We observe models struggle to generate logically sound arguments in $\mathbf{S}_1$. In Tab. 1, we report the performance of all the models in argument generation. Since GPT-4 (gpt-4 on OpenAI's API) is a good fallacy identifier (Li et al., 2024), further validated by our own human annotation task to verify reliability (Section 5.2), we use it to assess the fallacy-rate of the generated arguments. In $\mathbf{S}_1$, ChatGPT outperformed the open-source models; however, it still generates fallacious arguments in $21\%$ of the cases. We notice a very sharp improvement in $\mathbf{S}_2$, implying explicit knowledge of fallacies and examples in prompts help generate logical arguments. While we include $\mathbf{S}_2$ as a baseline to assess the impact of explicit guidance, our study examines LLMs' inherent ability to generate logical arguments without assistance, mirroring the conditions of $\mathbf{S}_1$. The detailed distributions of fallacy types across different approaches in the zero-shot setting are presented in Tab. 7.
|
| 48 |
+
|
| 49 |
+
<table><tr><td>Model</td><td>ChatGPT</td><td>Llama-2</td><td>Mistral</td><td>Llama-2-RAG</td></tr><tr><td>fallacy-rate S1</td><td>21</td><td>55</td><td>38</td><td>37</td></tr><tr><td>fallacy-rate S2</td><td>14</td><td>21</td><td>18</td><td>19</td></tr></table>
|
| 50 |
+
|
| 51 |
+
Table 1: fallacy-rate for arguments generated by different baselines.
|
| 52 |
+
|
| 53 |
+
# 4 Methodology
|
| 54 |
+
|
| 55 |
+
To address the challenge of generating fallacy-free arguments, we propose using preference learning methods to generate arguments logically aligned with the given topic and stance. This approach involves making models aware of logical fallacies and training them to generate logically correct arguments by rewarding valid arguments and penalizing dispreferred samples. The process of preference learning typically involves three main steps: (i) supervised fine-tuning (SFT) (Section 4.1), (ii) preference data collection (Section 4.2) and (iii) reinforcement learning (Section 4.3). In Section 4.4, we introduce our method (FIPO), which incorporates fine-grained information about fallacies into the alignment process. A comprehensive overview of the methodology is presented in Fig. 2. To justify our design and methodology for FIPO and the preference data collection, we perform two ablation studies with different training approaches, described in Section 5.3. The results demonstrate that our design achieves the best performance.
|
| 56 |
+
|
| 57 |
+
# 4.1 Supervised Fine-Tuning
|
| 58 |
+
|
| 59 |
+
We fine-tune a pretrained language model $\pi_{\beta}$ on the EXPLAGRAPHs (Saha et al., 2021) dataset $\mathcal{D}$ with maximum likelihood estimation to obtain $\pi_{\mathrm{SFT}}$ .
|
| 60 |
+
|
| 61 |
+
$$
\mathcal{L}_{\mathrm{SFT}}(\pi_{\beta}) = -\mathbb{E}_{(t, s, y_w) \sim \mathcal{D}}\left[\log \pi_{\beta}(y_w \mid t, s)\right] \tag{1}
$$
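As a concrete illustration of Equation 1, a minimal sketch of the SFT objective for a single (topic, stance, argument) triple with a Hugging Face causal LM might look as follows; the model name, prompt template, and masking scheme are our assumptions, not the authors' exact setup.

```python
# Minimal SFT-loss sketch for Eq. 1 (assumed setup, not the authors' exact code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-hf"  # one of the paper's base models
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def sft_loss(topic: str, stance: str, argument: str) -> torch.Tensor:
    # Hypothetical prompt template: condition on topic and stance, score y_w.
    prompt = f"Topic: {topic}\nStance: {stance}\nArgument:"
    enc = tokenizer(prompt + " " + argument, return_tensors="pt")
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    labels = enc.input_ids.clone()
    labels[:, :prompt_len] = -100  # mask prompt tokens: only -log p(y_w | t, s) counts
    return model(input_ids=enc.input_ids,
                 attention_mask=enc.attention_mask,
                 labels=labels).loss
```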
|
| 64 |
+
|
| 65 |
+
# 4.2 Preference Data Collection
|
| 66 |
+
|
| 67 |
+
Conventionally, after the SFT phase, $\pi_{\mathrm{SFT}}$ is prompted with input $x$ to produce pairs of outputs $(y_1, y_2) \sim \pi_{\mathrm{SFT}}(y|x)$ , which are then presented to human annotators to rank as preferred and dispreferred responses. Our objective is to reduce logical fallacy errors in the model's outputs; therefore, including a diverse range of fallacy types in the preference data is essential, as these may not be sufficiently represented in the model's outputs. We define 13 categories of logical fallacy errors (see Fig. 3). However, there are two key challenges: (i) determining the appropriate distribution of logical fallacy errors in the preference data, and (ii) automatically collecting such fallacy arguments. To address the first concern, we leverage the LOGIC dataset (Jin et al., 2022), which was carefully curated through extensive web crawling and data collection from diverse online sources. This dataset reflects the distribution of fallacies in real-world scenarios, providing a realistic foundation for mitigating fallacies in everyday argumentative discourse. This data consists of labelled logical fallacies, which we use only as examples when addressing the second concern: generating synthetic fallacious arguments for given topics. Using the EXPLAGRAPHS dataset defined as $\mathcal{D}$ in Section 3 (consisting of topics, stances and arguments), we use ChatGPT (gpt-3.5-turbo) to build preference pairs by generating an equivalent fallacious argument $y_{l}$ for each valid argument $y_{w}$ in $\mathcal{D}$ . To ensure arguments generated by ChatGPT are indeed fallacies, we provide a definition of the specific fallacy being generated and examples of that fallacy type from the LOGIC dataset. To populate our preference dataset and ensure it spans as many fallacy types as possible, we generate four fallacious arguments with different fallacy types, sampled from the distribution in Fig. 3, for each $y_{w}$ . The original dataset $\mathcal{D} = \{t^{(i)}, s^{(i)}, y_{w}^{(i)}\}_{i=1}^{N}$ containing the topic, stance and argument is now augmented with fallacies and their labels, denoted as $y_{l}$ and $k$ respectively. We define the preference dataset as $\mathcal{D}' = \{t^{(i)}, s^{(i)}, y_{w}^{(i)}, y_{l}^{(i)}, k^{(i)}\}_{i=1}^{M}$ , where we have $M$ pairs of preferred $(y_{w})$ and dispreferred $(y_{l})$ samples, with $k^{(i)}$ the fallacy type of the dispreferred argument $y_{l}^{(i)}$ . The test set is not augmented with fallacies, as we use only the topics and stances at inference time to evaluate the quality and logical soundness of the arguments generated. More details on the generations and our prompt designs for ChatGPT are presented in Appendix A.1. Finally, our augmented fallacy argument dataset consists of the train-test split shown in Tab. 2.



Figure 3: Distribution of different fallacy types according to the LOGIC dataset (Jin et al., 2022), based on which we build our preference dataset.
|
| 73 |
+
|
| 74 |
+
<table><tr><td></td><td># Train</td><td># Test</td></tr><tr><td>EXPLAGRAPHS data (Saha et al., 2021)</td><td>1,968</td><td>400</td></tr><tr><td>Generated Fallacies</td><td>7,872</td><td>-</td></tr><tr><td>Total</td><td>7,872</td><td>400</td></tr></table>
|
| 75 |
+
|
| 76 |
+
Table 2: Train-Test split of our preference dataset.
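A sketch of this collection loop might look as follows; `FALLACY_DIST` stands in for the Fig. 3 frequencies (only the faulty-generalization share of 18% is stated in the text), and the ChatGPT call is reduced to a hypothetical prompt builder rather than the actual Appendix A.1 prompt.

```python
# Sketch of the preference-pair construction in Section 4.2 (illustrative only).
import random

# Empirical fallacy distribution from the LOGIC dataset (Fig. 3); only the
# faulty-generalization share (18%) is stated in the text, the rest is elided.
FALLACY_DIST = {"faulty generalization": 0.18}  # plus the 12 other types

def fallacy_prompt(topic, stance, fallacy_type):
    # Hypothetical stand-in for the ChatGPT prompt; the real prompt also
    # includes a definition and LOGIC examples of fallacy_type.
    return (f"Write a short {stance} argument on '{topic}' that commits "
            f"a {fallacy_type} fallacy.")

def build_preference_pairs(topic, stance, y_w, n_fallacies=4):
    types = random.choices(list(FALLACY_DIST),
                           weights=list(FALLACY_DIST.values()),
                           k=n_fallacies)
    # y_l would be ChatGPT's completion of fallacy_prompt(...); kept abstract here.
    return [{"topic": topic, "stance": stance, "chosen": y_w,
             "prompt_for_rejected": fallacy_prompt(topic, stance, k),
             "fallacy_type": k} for k in types]
```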
|
| 77 |
+
|
| 78 |
+
# 4.3 Preference Learning Phase
|
| 79 |
+
|
| 80 |
+
In this work, we use four preference learning algorithms: PPO, DPO, KTO, and CPO. Among these, only PPO requires explicit feedback from a reward model. For the other methods, we apply the preference optimization using $\pi_{\mathrm{SFT}}$ as a reference model and the preference data $\mathcal{D}'$ .
|
| 81 |
+
|
| 82 |
+
Explicit Reward Modeling. We use the dataset $\mathcal{D}'$ to train an Electra model (Clark et al., 2020) to predict reward values.
|
| 83 |
+
|
| 84 |
+
Implicit Reward Modeling. Methods like DPO, KTO, and CPO employ contrastive loss to derive implicit rewards from preference datasets. Note that CPO (Xu et al., 2024) is a reference-free method that does not require a reference policy. The different methods are detailed in Appendix B.
|
| 85 |
+
|
| 86 |
+
# 4.4 Fallacy-Informed Preference Optimization (FIPO)
|
| 87 |
+
|
| 88 |
+
Despite the preference optimization, models persistently generate specific types of logical fallacies, particularly faulty generalization and false causality arguments (Tab. 5). This can be attributed to the fact that the models do not explicitly learn about the fallacy types. Hence, we propose FIPO, which uses a classification head attached to the generative model to calculate a weighted cross-entropy loss for the preferred and dispreferred samples. Recall the preference dataset $\mathcal{D}' = \{t^{(i)}, s^{(i)}, y_w^{(i)}, y_l^{(i)}, k^{(i)}\}_{i=1}^M$ where $k \in [1,13]$ is the fallacy type of $y_l$ . We also label the preferred samples $y_w$ as 'Not a Fallacy' ( $k = 0$ ). Then, after a forward pass through the language model $\pi_\theta$ , we extract the hidden state for the last token from the last hidden layer, defined as:
|
| 89 |
+
|
| 90 |
+
$$
\mathbf{h}_{\theta}(y \mid t, s) := \pi_{\theta}(y \mid t, s)^{L, T} \tag{2}
$$
|
| 93 |
+
|
| 94 |
+
where $L$ represents the total number of layers in the base model and $T$ denotes the position of the last token. This hidden state is fed into the classification head using a linear layer, and the resulting output defines the probability for fallacy type $k$ :
|
| 95 |
+
|
| 96 |
+
$$
\mathbb{P}^{k}_{\mathbf{h}_{\theta}}(y \mid t, s) = \operatorname{Softmax}\left(\mathbf{W}\mathbf{h}_{\theta}(y \mid t, s) + \mathbf{b}\right)_{k} \tag{3}
$$
|
| 99 |
+
|
| 100 |
+
where $\mathbf{W}$ is the linear layer's weight matrix, and $\mathbf{b}$ is the corresponding bias term. To avoid penalizing the model equally for misclassifying different types of fallacies, we propose to guide the model to prioritize the most frequent fallacy types. This approach ensures that the model accurately identifies the most frequently occurring fallacies. We define the weight $w_{k}$ for each fallacy type $k$ as its frequency in $\mathcal{D}'$ , and $w_{0}$ as the minimum of these frequencies, which lets the model focus more on the fallacies in the dispreferred samples than on the non-fallacy samples during the preference optimization process: $w_{k} = \frac{1}{M}\sum_{i = 1}^{M}\mathbb{1}\{k^{(i)} = k\}$ and $w_{0} = \min_{k}w_{k}$ , where $\mathbb{1}$ is the indicator function. Using these weights and the definitions in Equations 2 and 3, we define the fallacy-informed classification loss as a weighted cross-entropy loss:
|
| 103 |
+
|
| 104 |
+
$$
\mathcal{L}_{\mathrm{CLF}}(\pi_{\theta}) = -\mathbb{E}_{(t, s, y_w, y_l, k) \sim \mathcal{D}'}\left[w_{0} \log \mathbb{P}^{0}_{\mathbf{h}_{\theta}}(y_w \mid t, s) + w_{k} \log \mathbb{P}^{k}_{\mathbf{h}_{\theta}}(y_l \mid t, s)\right] \tag{4}
$$
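In code, Equations 2-4 amount to a linear head over the final token's last-layer hidden state plus a frequency-weighted cross-entropy. The sketch below is our reading of that setup (14 classes: 'Not a Fallacy' plus 13 fallacy types); names and shapes are chosen for illustration.

```python
# Sketch of L_CLF (Eqs. 2-4): weighted cross-entropy on a linear fallacy head.
import torch
import torch.nn.functional as F

NUM_CLASSES = 14  # k = 0 ("Not a Fallacy") + 13 fallacy types

class FallacyHead(torch.nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.linear = torch.nn.Linear(hidden_size, NUM_CLASSES)  # W, b of Eq. 3

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        h = hidden_states[:, -1, :]   # h_theta: last token of the last layer (Eq. 2)
        return self.linear(h)         # logits; softmax is applied inside the loss

def clf_loss(head, h_w, h_l, k, class_weights):
    """h_w / h_l: last-layer hidden states for preferred / dispreferred arguments;
    k: fallacy labels of the dispreferred samples; class_weights: the w_k."""
    target_w = torch.zeros(h_w.shape[0], dtype=torch.long)  # preferred -> k = 0
    # F.cross_entropy(..., weight=...) computes a w_k-weighted mean of the
    # per-sample -log p terms, matching Eq. 4 up to normalization.
    return (F.cross_entropy(head(h_w), target_w, weight=class_weights)
            + F.cross_entropy(head(h_l), k, weight=class_weights))
```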
|
| 111 |
+
|
| 112 |
+
The resulting loss function, termed the Fallacy-Informed Preference Optimization loss, combines the preference optimization loss with our classification loss $(\mathcal{L}_{\mathrm{CLF}})$ . In our work, CPO (Xu et al., 2024) is the method with which we combine our loss, since it has the best trade-off between win-rate and fallacy-rate (see Fig. 5 & 6, Tab. 5). The combined loss is:
|
| 113 |
+
|
| 114 |
+
$$
\mathcal{L}_{\mathrm{FIPO}}(\pi_{\theta}) = \mathcal{L}_{\mathrm{CPO}}(\pi_{\theta}) + \lambda \mathcal{L}_{\mathrm{CLF}}(\pi_{\theta}) \tag{5}
$$
|
| 117 |
+
|
| 118 |
+
where $\lambda$ is a weighting parameter to adjust the fallacy-informed loss with respect to the preference optimization loss. A more detailed description of $\mathcal{L}_{\mathrm{FIPO}}$ is described in Appendix F.
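Putting the pieces together, Equation 5 can be sketched as below; `cpo_terms` is a simplified stand-in for the CPO objective of Xu et al. (2024) (a reference-free contrastive term plus an NLL term on preferred samples), and the values of beta and lambda are illustrative, not the paper's hyperparameters.

```python
# Sketch of the combined FIPO objective (Eq. 5), under simplified assumptions.
import torch
import torch.nn.functional as F

def cpo_terms(logp_w: torch.Tensor, logp_l: torch.Tensor, beta: float = 0.1):
    # Simplified CPO: prefer y_w over y_l on sequence log-probs, plus an NLL
    # regularizer on the preferred argument (our reading of Xu et al., 2024).
    pref = -F.logsigmoid(beta * (logp_w - logp_l)).mean()
    nll = -logp_w.mean()
    return pref + nll

def fipo_loss(logp_w, logp_l, clf_term, lam: float = 1.0):
    return cpo_terms(logp_w, logp_l) + lam * clf_term  # L_CPO + lambda * L_CLF
```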
|
| 119 |
+
|
| 120 |
+
# 5 Experimental Setup
|
| 121 |
+
|
| 122 |
+
We denote the policies obtained after the SFT phase and the alignment phase as $\pi_{\mathrm{SFT}}$ and $\pi_{\theta}$ , respectively. The policy $\pi_{\theta}$ is aligned using one of the following methods: PPO (Schulman et al., 2017), DPO (Rafailov et al., 2023), CPO (Xu et al., 2024), KTO (Ethayarajh et al., 2024), and FIPO.
|
| 123 |
+
|
| 124 |
+
# 5.1 Datasets and Base Models
|
| 125 |
+
|
| 126 |
+
Datasets. We evaluate argument generation based on the EXPLAGRAPHS dataset (Saha et al., 2021), where samples contain a Topic, a Stance, and short Arguments (5-20 words), spanning a wide range of topics. We augment this dataset by generating equivalent short arguments in the form of fallacies using ChatGPT, as described in Section 4.2 and illustrated in Fig. 2. The LOGIC dataset also contains short-length fallacies. Based on the length of arguments and fallacies, our study focuses only on short argumentative texts. The final dataset size is provided in Tab. 2. We also perform an out-of-domain analysis on a subset of samples from the Debatepedia dataset (Cabrio and Villata, 2012).
|
| 129 |
+
|
| 130 |
+
Base Models. We use Llama-2 (7B) (Touvron et al., 2023) and Mistral (7B) (Jiang et al., 2023) as our base models. For each alignment method, we leverage Low-Rank Adaptation (LoRA) (Hu et al., 2021). This drastically reduces the number of parameters that need to be fine-tuned, from 7B to $8.3\mathrm{M}(\approx 0.12\%)$ . For both base models, we obtain a reference policy $\pi_{\mathrm{SFT}}$ and an aligned policy $\pi_{\theta}$ for every alignment method. More details about hyperparameters, including training and decoding parameters, are described in Appendix C.1.
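For reference, a LoRA setup in this spirit can be written with the peft library as below; the rank, scaling, and target modules are our assumptions, chosen so that a 7B model ends up with roughly the reported ~8.3M trainable parameters.

```python
# Sketch of a LoRA configuration matching the reported parameter budget
# (assumed hyperparameters, not the authors' exact ones).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"],  # attention projections
                    task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # ~8.4M of 7B trainable with these choices
```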
|
| 131 |
+
|
| 132 |
+
# 5.2 Evaluation
|
| 133 |
+
|
| 134 |
+
Metrics. We use two metrics to evaluate the arguments generated by the aligned models compared to the baseline SFT model: the win-rate and the fallacy-rate. The win-rate measures the proportion of instances where one argument is judged to be of higher quality than the other, while the fallacy-rate represents the proportion of logical fallacies detected in the generated arguments.
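Both metrics are simple proportions over the judged samples; a minimal sketch follows, where the judgment lists are hypothetical outputs of the GPT-4 judge.

```python
# Win-rate and fallacy-rate as plain proportions (illustrative helpers).
def win_rate(judgments):
    # judgments: "aligned", "sft", or "tie" per compared pair
    return sum(j == "aligned" for j in judgments) / len(judgments)

def fallacy_rate(flags):
    # flags: True if the judge detected any fallacy in the generated argument
    return sum(flags) / len(flags)
```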
|
| 135 |
+
|
| 136 |
+
Human Evaluation. We conduct a human evaluation to compute the win-rate to validate the relevance and quality of the generated arguments. We select 200 samples from the set of arguments generated by the aligned models, along with the corresponding topics, stances (either supporting or counter) and the equivalent arguments generated by the SFT model. Annotators perform a comparative evaluation between the SFT and aligned models by determining which argument is superior or whether both are equally good or bad. Despite the subjective nature of this task, as specific arguments may appeal differently to different individuals, we provide instructions to annotators, including selecting the argument that most clearly addresses the topic and stance. Refer to Fig. 8 for more details. We also perform an annotation task where workers agree or disagree with GPT-4's fallacy predictions. This allows us to validate GPT-4 as a judge for computing the fallacy-rate. Annotators are recruited from Amazon Mechanical Turk (mturk.com) for this task. We limit our selection to native English speakers residing in the United States. The eligibility criteria for annotators include a HIT approval rate of at least $97\%$ and a minimum of 10,000 approved HITs. We present more details, including the annotation instructions, in Appendix C.4. The annotators were fairly compensated.
|
| 139 |
+
|
| 140 |
+
Additionally, we conduct an in-depth analysis of fallacy classification by classifying 200 arguments ourselves, aiming to provide an unbiased comparison between human evaluation and GPT-4's classifications. Since identifying logical fallacies requires domain knowledge and can be particularly challenging, our annotations help assess GPT-4's ability to classify and identify fallacies. This analysis sheds light on the model's capacity to generate logical arguments. Our annotation is depicted in Fig. 4, where the rows are our predictions and the columns are GPT-4's predictions.
|
| 141 |
+
|
| 142 |
+
Automatic Evaluation with GPT-4. We perform a pairwise comparison between the samples generated by $\pi_{\mathrm{SFT}}$ and its counterparts generated by $\pi_{\theta}$ using the win-rate and fallacy-rate. For the win-rate, we prompt GPT-4 to decide which argument is superior, or if both are equally good. For the fallacy-rate, we prompt GPT-4 to evaluate the argument and detect whether the argument is any of the fallacy types out of all the ones listed in Tab. 9, if one is present. A description of the GPT-4 evaluation and prompts is detailed in Appendix A.2.
|
| 143 |
+
|
| 144 |
+
GPT-4's Reliability in Detecting Fallacies. We rely on GPT-4 to compute the fallacy-rate since Li et al. (2024) show GPT-4 performs well on logical fallacy detection, achieving $86\%$ accuracy. To validate GPT-4's reliability for fallacy classification, we use human annotators and ask them to agree or disagree with the predictions. We collect three annotations for each sample, consisting of "Agree" or "Disagree" responses. We compute the percentage agreement rate to assess the agreements between annotators and GPT-4, reflecting how often the majority vote matches GPT-4's prediction. This directly assesses GPT-4's competence in classifying fallacies, making it reliable for computing the fallacy-rate. We also compute the agreement among annotators using Randolph's $\kappa$ (Randolph, 2005), which is well-suited for scenarios with rating imbalances (e.g., frequent "Agree" responses) by reflecting pure agreement, not assuming an expected distribution of categories. Randolph's $\kappa$ is computed as $\kappa = \frac{P_o - P_e}{1 - P_e}$ where $P_o$ is the observed agreement:
|
| 145 |
+
|
| 146 |
+
$$
P_{o} = \frac{1}{Nn(n-1)}\left(\sum_{i=1}^{N} \sum_{k \in \{A, D\}} n_{ik}^{2} - Nn\right) \tag{6}
$$
|
| 149 |
+
|
| 150 |
+
where $N$ is the number of samples annotated, $n$ is the number of annotators per sample (3 in our case), and $n_{iA}$ and $n_{iD}$ are the numbers of 'Agree' and 'Disagree' ratings for sample $i$ . $P_e = 0.5$ is the expected agreement for the two categories ('Agree' and 'Disagree'). The agreement rates are displayed in Tab. 3, and the results demonstrate the effectiveness and reliability of using GPT-4 for classifying fallacies, showing substantial agreement among annotators (0.64) and a high majority agreement ratio (0.955). More details are provided in Appendix E.
|
| 151 |
+
|
| 152 |
+
<table><tr><td>Agreement Metric</td><td>Value</td></tr><tr><td>Randolph's-κ</td><td>0.640</td></tr><tr><td>Majority agreement ratio</td><td>0.955</td></tr></table>
|
| 153 |
+
|
| 154 |
+
Table 3: Agreement scores. Randolph's- $\kappa$ reflects agreements among annotators and majority agreement computes the agreement rate between annotators and GPT-4.
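As a sanity check, $P_o$ and Randolph's $\kappa$ for this three-annotator, two-category setup can be computed in a few lines; `counts` is a hypothetical list of (agree, disagree) tallies per sample.

```python
# Randolph's kappa (Eq. 6) for n = 3 annotators and P_e = 0.5 (two categories).
def randolph_kappa(counts, n=3, p_e=0.5):
    N = len(counts)
    p_o = (sum(a * a + d * d for a, d in counts) - N * n) / (N * n * (n - 1))
    return (p_o - p_e) / (1 - p_e)

print(randolph_kappa([(3, 0), (0, 3)]))  # perfect agreement -> 1.0
print(randolph_kappa([(2, 1), (1, 2)]))  # mixed ratings -> lower kappa
```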
|
| 155 |
+
|
| 156 |
+
Additionally, to verify GPT-4's reliability, we (the authors) perform a fallacy classification task on a set of 200 generated arguments at inference as an in-depth analysis of differences with GPT-4. The heatmap in Fig. 4 shows the overlap between our classifications ( $y$ -axis) and GPT-4's ( $x$ -axis). The heatmap entries are normalized to show the degree of alignment between our annotations and GPT-4's predictions. We observe that most classifications cluster around the first few fallacy types—faulty generalization, false causality, and fallacy of relevance. The most significant disparity arises in the fallacy of relevance, where we identify this fallacy more frequently. Additionally, the heatmap shows notable overlap between faulty generalization and false causality—when we classify an argument as false causality, GPT-4 often predicts it as a faulty generalization instead. This suggests that the model may struggle to distinguish between broad overgeneralization and causal misattributions, likely due to subtle linguistic differences in argument structure. Although the overall disagreement is limited, these findings highlight the difficulty of fallacy classification. Even with a solid understanding of logical fallacies, arguments can be ambiguous and complex, making misclassification a frequent challenge.
|
| 157 |
+
|
| 158 |
+
# 5.3 Ablation Study
|
| 159 |
+
|
| 160 |
+
To validate the effectiveness of our method, we conducted two ablation studies:
|
| 161 |
+
|
| 162 |
+
Dataset Uniformity: The first study involves modifying the training dataset to include an equal number of samples for each fallacy type. For this study, we created a uniformly distributed dataset by downsampling, resulting in a dataset comprising 2,522 samples (194 per fallacy type).



Figure 4: Heatmap of our classifications compared to GPT-4's predictions. Rows are the authors' classifications and columns are GPT-4's.
|
| 168 |
+
|
| 169 |
+
Unweighted Cross-Entropy: The second study examines the impact of applying FIPO with unweighted cross-entropy. The higher fallacy misclassification rates observed in this setting justify our weighted design.
|
| 170 |
+
|
| 171 |
+
Table 4 shows a sharp increase in fallacy rates, underscoring the importance of accounting for the natural distribution of fallacy types and incorporating a weighted cross-entropy classification loss.
|
| 172 |
+
|
| 173 |
+
<table><tr><td></td><td>Fallacy Rates</td></tr><tr><td>Dataset Uniformity</td><td>37.5%</td></tr><tr><td>Unweighted Cross-Entropy</td><td>29%</td></tr><tr><td>FIPO</td><td>17%</td></tr></table>
|
| 174 |
+
|
| 175 |
+
Table 4: Ablation study demonstrating the effectiveness of preserving the natural (imbalanced) distribution of fallacy types and of the weighted cross-entropy loss.
|
| 176 |
+
|
| 177 |
+
# 6 Experimental Results
|
| 178 |
+
|
| 179 |
+
As outlined in Section 5.2, our evaluation of the generated arguments focuses on two primary aspects: (i) pairwise comparison of argument quality between the reference policy $\pi_{\mathrm{SFT}}$ and the aligned policies $\pi_{\theta}$ , which is detailed in Section 6.1; and (ii) the analysis of fallacy-rate across different preference optimization methods in Section 6.2.
|
| 180 |
+
|
| 181 |
+
# 6.1 Pairwise Comparison of Different Preference Optimization Methods
|
| 182 |
+
|
| 183 |
+
We perform a pairwise comparison to compute the win-rate between arguments generated by the SFT baselines and all the aligned models. Each argument undergoes a manual and automatic (GPT-4-based) comparative evaluation, whose results are shown in Fig. 5 and Tab. 8. For the human evaluation, each sample receives three assessments. Samples lacking majority consensus among annotators are excluded from further analysis. From the human-annotated win-rate depicted in Fig. 5, we address the following research questions:
|
| 186 |
+
|
| 187 |
+

|
| 188 |
+
Figure 5: Human evaluation results comparing arguments generated by different preference optimization strategies using Llama-2 as the base model. The win-rate indicates how often each policy outperforms the SFT baseline regarding argument quality.
|
| 189 |
+
|
| 190 |
+
$\mathbf{RQ_1}$ : Are preference optimization methods better than SFT? The aligned policies outperform $\pi_{\mathrm{SFT}}$ in terms of win-rate, indicating an improvement in overall argument quality. DPO, CPO and FIPO are the only methods achieving over $40\%$ win-rate, demonstrating a better ability to generate qualitative arguments. $\pi_{\mathrm{CPO}}$ stands out as it has the highest percentage of wins $(50.3\%)$ , followed closely by $\pi_{\mathrm{DPO}}$ $(46.5\%)$ . We observe similar results in the automatic win-rate evaluation with GPT-4 (Fig. 6), where DPO, CPO and FIPO have the highest win-rates.
|
| 191 |
+
|
| 192 |
+

|
| 193 |
+
Figure 6: GPT-4 evaluation of win-rate and fallacy-rate, for arguments generated by Llama-2. The win-rate represents the frequency with which the aligned policy outperforms SFT. The fallacy-rate measures the proportion of the detected fallacies. The best-performing policy is our proposed FIPO method that achieves the lowest fallacy-rate (17%) and the highest win-rate (63.5%).
|
| 194 |
+
|
| 195 |
+
$\mathbf{RQ}_2$ : Does FIPO improve from existing preference methods? FIPO uses the classification
|
| 196 |
+
|
| 197 |
+
loss defined in Section 4.4 on top of CPO's loss. This is because CPO achieves the best trade-off between win-rate and fallacy-rate despite having a higher loss-rate (Fig. 5). We denote this policy as $\pi_{\mathrm{FIPO}}$ . Although the win-rate of $\pi_{\mathrm{FIPO}}$ is slightly lower at $46\%$ compared to CPO's $50.3\%$ , it is essential to note the significant decrease in loss-rate—from $40.3\%$ to $23\%$ . This reduction indicates that $\pi_{\mathrm{SFT}}$ wins against $\pi_{\mathrm{FIPO}}$ less frequently, suggesting that $\pi_{\mathrm{FIPO}}$ produces arguments that are more qualitative compared to the baseline and to $\pi_{\mathrm{CPO}}$ . This improvement highlights the benefits of including more fine-grained details in FIO's loss, making models more aware of logical fallacies, as FIPO now yields outcomes that are not only equivalent but often superior to those generated by the $\pi_{\mathrm{SFT}}$ . Similar observations are drawn from the automatic win-rate evaluation in Appendix D.2, where GPT-4 served as the judge to evaluate the arguments and only CPO achieved over $50\%$ for both Llama-2 and Mistral, as shown in Tab. 8.
|
| 198 |
+
|
| 199 |
+
# 6.2 Results for Fallacy-Rate and Types
|
| 200 |
+
|
| 201 |
+
Evaluating text segments to identify logical fallacies poses inherent challenges for humans. Detecting such fallacies demands an extensive understanding of logical principles and argumentative structures. Without a robust grasp of logical fallacies, differentiating between valid and flawed reasoning becomes difficult. Additionally, personal biases and preconceptions can also cloud judgment, leading to overlooked fallacies or biased interpretations of arguments. We report GPT-4's evaluations in Tab. 5. Based on the results, we address the following research questions:
|
| 202 |
+
|
| 203 |
+
$\mathbf{RQ}_3$ : Do preference optimization methods mitigate logical fallacy errors? The aligned policies produce fewer fallacies compared to the SFT baselines. Specifically, every alignment method outperforms $\pi_{\mathrm{SFT}}$ for Llama-2. For Llama-2 and Mistral, DPO is the method that improves the least, and even produces more fallacies than $\pi_{\mathrm{SFT}}$ with Mistral, having a higher fallacy-rate. The other methods (PPO, CPO and KTO) consistently outperform SFT and produce fewer fallacies.
|
| 204 |
+
|
| 205 |
+
$\mathbf{RQ}_4$ : Does FIPO further reduce logical fallacy errors? The least fallacy producing policy is $\pi_{\mathrm{FIPO}}$ , achieving a fallacy-rate of $17 \%$ for Llama-2, outperforming the previous best of $26 \%$ (PPO). For Mistral, FIPO has a fallacy-rate of $19.5\%$ , outperforming the previous best of $27.75\%$
|
| 206 |
+
|
| 207 |
+
<table><tr><td rowspan="2">Fallacy Types</td><td colspan="6">Llama-2 (7B)</td><td colspan="6">Mistral (7B)</td></tr><tr><td>SFT</td><td>DPO</td><td>PPO</td><td>CPO</td><td>KTO</td><td>FIPO</td><td>SFT</td><td>DPO</td><td>PPO</td><td>CPO</td><td>KTO</td><td>FIPO</td></tr><tr><td>Faulty Generalization</td><td>27.5</td><td>21</td><td>17.5</td><td>19.25</td><td>21</td><td>7</td><td>23</td><td>24</td><td>22.25</td><td>22.25</td><td>21</td><td>9.5</td></tr><tr><td>False Causality</td><td>2.5</td><td>5</td><td>4.25</td><td>4.75</td><td>4.5</td><td>3.5</td><td>5.25</td><td>5.75</td><td>5</td><td>4</td><td>3.5</td><td>4</td></tr><tr><td>Appeal To Emotion</td><td>1</td><td>1.25</td><td>0.75</td><td>1.75</td><td>-</td><td>2.5</td><td>1.25</td><td>1.75</td><td>0.25</td><td>1.5</td><td>1.75</td><td>3</td></tr><tr><td>Equivocation</td><td>1</td><td>1</td><td>1.25</td><td>0.25</td><td>0.75</td><td>-</td><td>0.75</td><td>-</td><td>0.5</td><td>0.25</td><td>0.25</td><td>-</td></tr><tr><td>Fallacy of Relevance</td><td>0.5</td><td>0.25</td><td>0.75</td><td>0.25</td><td>0.25</td><td>-</td><td>-</td><td>0.5</td><td>-</td><td>0.75</td><td>0.25</td><td>0.5</td></tr><tr><td>Circular Reasoning</td><td>1</td><td>-</td><td>1.25</td><td>-</td><td>0.75</td><td>1.5</td><td>0.75</td><td>0.25</td><td>-</td><td>-</td><td>-</td><td>0.5</td></tr><tr><td>Ad Populum</td><td>-</td><td>1.25</td><td>-</td><td>0.5</td><td>-</td><td>-</td><td>0.25</td><td>0.25</td><td>0.25</td><td>1</td><td>0.25</td><td>1</td></tr><tr><td>False Dilemma</td><td>1</td><td>1.25</td><td>-</td><td>1</td><td>0.25</td><td>2.5</td><td>1</td><td>1</td><td>0.5</td><td>0.25</td><td>0.75</td><td>1</td></tr><tr><td>Ad Hominem</td><td>-</td><td>-</td><td>0.25</td><td>0.25</td><td>0.25</td><td>-</td><td>0.25</td><td>0.25</td><td>0.25</td><td>-</td><td>-</td><td>0.5</td></tr><tr><td>Not A Fallacy</td><td>65.5</td><td>69</td><td>74</td><td>72</td><td>72.25</td><td>83</td><td>67.5</td><td>66.25</td><td>71</td><td>70</td><td>72.25</td><td>80.5</td></tr><tr><td>Fallacy-Rate ↓</td><td>34.5</td><td>31</td><td>26</td><td>28</td><td>27.75</td><td>17</td><td>32.5</td><td>33.75</td><td>29</td><td>30</td><td>27.75</td><td>19.5</td></tr></table>
|
| 208 |
+
|
| 209 |
+
(KTO) (Tab. 5). More specifically, FIPO, based on CPO, beats CPO by $11\%$ and $10.5\%$ for Llama-2 and Mistral, respectively. This highlights the utility of the classification loss, indicating that the policies have a better understanding of logical fallacies than regular preference optimization.
|
| 210 |
+
|
| 211 |
+
$\mathbf{RQ}_5$ : What is the most observed fallacy type? The most frequently observed fallacy produced across all policies is Faulty Generalization. For $\pi_{\mathrm{FIPO}}$ , the occurrence for this type is only $7\%$ , effectively integrating the concept of generalization. An example of this occurrence is illustrated in Tab. 7. As Faulty Generalization is the most frequent fallacy type in our preference dataset (18%, Fig. 3), the weight assigned to this fallacy type in Equation 4 is the largest. Consequently, a higher classification loss is incurred if these fallacies are misclassified, enhancing the language model's ability to accurately identify and reduce occurrences of Faulty Generalization. We also observe that of the 13 fallacy types, GPT-4 never classifies arguments in the following classes: Fallacy of logic, credibility, extension and intentional.
|
| 212 |
+
|
| 213 |
+
# 6.3 Out-of-Domain Analysis
|
| 214 |
+
|
| 215 |
+
To showcase the effectiveness of alignment methods in argument generation, we sample a test set of 100 different topics from the Debatepedia dataset (Cabrio and Villata, 2012) and perform inference using the models previously trained on our preference dataset. Using GPT-4 as an evaluator, we compute the win-rates and fallacy-rates, presented in Tab. 6. Using Llama-2 as the base model, results show that $\pi_{\mathrm{FIPO}}$ is the second-best policy in terms of minimizing fallacies $(55\%)$ , slightly behind $\pi_{\mathrm{KTO}}$ (56%). We also find that FIPO achieves the highest win-rate, winning 62% of the time against $\pi_{\mathrm{SFT}}$ . Interestingly, our results reveal that FIPO helps to reduce False Causality and Fallacy of Relevance fallacies.
|
| 220 |
+
|
| 221 |
+
<table><tr><td>Fallacy Types</td><td>SFT</td><td>DPO</td><td>PPO</td><td>CPO</td><td>KTO</td><td>FIPO</td></tr><tr><td>Faulty Generalization</td><td>17</td><td>20</td><td>17</td><td>24</td><td>17</td><td>18</td></tr><tr><td>False Causality</td><td>9</td><td>7</td><td>6</td><td>8</td><td>8</td><td>5</td></tr><tr><td>Appeal To Emotion</td><td>7</td><td>16</td><td>13</td><td>13</td><td>7</td><td>12</td></tr><tr><td>Fallacy of Relevance</td><td>12</td><td>6</td><td>6</td><td>10</td><td>3</td><td>-</td></tr><tr><td>Ad Populum</td><td>3</td><td>4</td><td>2</td><td>1</td><td>1</td><td>-</td></tr><tr><td>False Dilemma</td><td>6</td><td>6</td><td>6</td><td>4</td><td>6</td><td>7</td></tr><tr><td>Equivocation</td><td>-</td><td>-</td><td>2</td><td>1</td><td>1</td><td>2</td></tr><tr><td>Circular Reasoning</td><td>4</td><td>2</td><td>-</td><td>2</td><td>1</td><td>1</td></tr><tr><td>Fallacy-Rate ↓</td><td>58</td><td>61</td><td>52</td><td>63</td><td>44</td><td>45</td></tr><tr><td>Win-Rate vs. SFT ↑</td><td>-</td><td>59</td><td>54</td><td>43</td><td>55</td><td>62</td></tr></table>
|
| 222 |
+
|
| 223 |
+
Table 6: Fallacies generated by different alignment methods in the out-of-domain setting, detected by GPT-4. We omit fallacy types that GPT-4 never reported. We also evaluate the win-rate and observe that FIPO achieves the highest win-rate and is the second-best policy at avoiding fallacies.
|
| 224 |
+
|
| 225 |
+
# 7 Conclusion
|
| 226 |
+
|
| 227 |
+
In this work, we investigate the impact of logical fallacies on argument generation and introduce FIPO, a novel framework designed to improve the logical soundness of arguments by including a classification loss during the preference optimization phase. Both human and automatic evaluations show that our method produces higher-quality arguments and achieves lower fallacy-rates. These findings underscore the importance of addressing logical fallacies in improving argument generation.
|
| 228 |
+
|
| 229 |
+
# Limitations
|
| 230 |
+
|
| 231 |
+
Although various preference optimization strategies have shown improvement over the SFT baseline in reducing fallacious arguments, the margin remains modest. This may be attributed to several factors: our assumption that the original dataset (Saha et al., 2021) was free of fallacies, the inherent complexity and diversity of fallacies which complicates effective detection, and the variability in model performance, particularly the weaker results from the Mistral model compared to Llama-2. Additionally, the limited size of our dataset and the brevity of arguments present further challenges, as the lack of contextual cues can hinder the models' ability to identify and avoid fallacies consistently.
|
| 232 |
+
|
| 233 |
+
# Ethics Statement
|
| 234 |
+
|
| 235 |
+
In this paper, we experiment with well-acknowledged datasets. Our framework improves argument generation in LLMs, which may encode biases related to race, gender, and other attributes (Weidinger et al., 2021; Sheng et al., 2020). As our work does not mitigate these biases, the models may still reflect harmful behaviors. We recommend users deploying our model off-the-shelf evaluate potential harm to protected groups and apply appropriate mitigation. While improving argument generation is valuable, it poses risks if misused. Bad actors could exploit these capabilities to amplify disinformation, manipulate public opinion, or influence democratic processes by spreading persuasive yet harmful narratives. To address this, robust safeguards, such as usage policies, monitoring, and detection tools, are critical. Finally, our annotation task relied on AMT workers evaluating model-generated arguments, particularly for logical fallacies, which are complex to assess. Workers were English-speaking and paid adequately for their time to ensure fairness.
|
| 236 |
+
|
| 237 |
+
# Acknowledgment
|
| 238 |
+
|
| 239 |
+
We acknowledge the support of the ICT-48 Network of AI Research Excellence Center "TAILOR" (EU Horizon 2020, GA No 952215). Antoine Bosselut gratefully acknowledges the support of the Swiss National Science Foundation (No. 215390), Innosuisse (PFFS-21-29), the EPFL Center for Imaging, Sony Group Corporation, and the Allen Institute for AI.
|
| 240 |
+
|
| 241 |
+
# References
|
| 242 |
+
|
| 243 |
+
Aristotle. 2006. On Sophistical Refutations. The Internet Classics Archive.
|
| 244 |
+
Steven Bird and Edward Loper. 2004. NLTK: The natural language toolkit. In Proceedings of the ACL Interactive Poster and Demonstration Sessions, pages 214-217, Barcelona, Spain. Association for Computational Linguistics.
|
| 245 |
+
Elena Cabrio and Serena Villata. 2012. Natural language arguments: A combined approach. Frontiers in Artificial Intelligence and Applications, 242.
|
| 246 |
+
Guizhen Chen, Liying Cheng, Anh Tuan Luu, and Lidong Bing. 2023a. Exploring the potential of large language models in computational argumentation. ArXiv, abs/2311.09022.
|
| 247 |
+
Meiqi Chen, Yubo Ma, Kaitao Song, Yixin Cao, Yan Zhang, and Dongsheng Li. 2023b. Learning to teach large language models logical reasoning. arXiv preprint arXiv:2310.09158.
|
| 248 |
+
Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems, 30.
|
| 249 |
+
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pretraining text encoders as discriminators rather than generators. CoRR, abs/2003.10555.
|
| 250 |
+
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. 2024. Kto: Model alignment as prospect theoretic optimization.
|
| 251 |
+
Jonathan St BT Evans. 2002. Logic and human reasoning: an assessment of the deduction paradigm. Psychological bulletin, 128(6):978.
|
| 252 |
+
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
|
| 253 |
+
Xinyu Hua, Zhe Hu, and Lu Wang. 2019. Argument generation with retrieval, planning, and realization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2661-2672, Florence, Italy. Association for Computational Linguistics.
|
| 254 |
+
Xinyu Hua and Lu Wang. 2018. Neural argument generation augmented with externally retrieved evidence. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 219-230, Melbourne, Australia. Association for Computational Linguistics.
|
| 255 |
+
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. arXiv preprint arXiv:2310.06825.
|
| 256 |
+
|
| 257 |
+
Zhijing Jin, Abhinav Lalwani, Tejas Vaidhya, Xiaoyu Shen, Yiwen Ding, Zhiheng Lyu, Mrinmaya Sachan, Rada Mihalcea, and Bernhard Schoelkopf. 2022. Logical fallacy detection. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 7180-7198, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 258 |
+
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. Association for Computational Linguistics.
|
| 259 |
+
Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. 2023. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. arXiv preprint arXiv:2309.00267.
|
| 260 |
+
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459-9474.
|
| 261 |
+
Yanda Li, Dixuan Wang, Jiaqing Liang, Guochao Jiang, Qianyu He, Yanghua Xiao, and Deqing Yang. 2024. Reason from fallacy: Enhancing large language models' logical reasoning through logical fallacy understanding. arXiv preprint arXiv:2404.04293.
|
| 262 |
+
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
|
| 263 |
+
Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. G-Eval: NLG evaluation using GPT-4 with better human alignment. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 2511–2522, Singapore. Association for Computational Linguistics.
|
| 264 |
+
Callistus Ireneous Nakpih and Simone Santini. 2020. Automated discovery of logical fallacies in legal argumentation. International Journal of Artificial Intelligence and Applications (IJAIA), 11.
|
| 265 |
+
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
|
| 266 |
+
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: an imperative style, high-performance deep learning library. Curran Associates Inc., Red Hook, NY, USA.
|
| 269 |
+
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825-2830.
|
| 270 |
+
Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with GPT-4. arXiv preprint arXiv:2304.03277.
|
| 271 |
+
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. In Thirty-seventh Conference on Neural Information Processing Systems.
|
| 272 |
+
Justus J Randolph. 2005. Free-marginal multirater kappa (multirater $\kappa_{\text{free}}$): An alternative to Fleiss' fixed-marginal multirater kappa. Advances in Data Analysis and Classification.
|
| 273 |
+
Ramon Ruiz-Dolz and John Lawrence. 2023. Detecting argumentative fallacies in the wild: Problems and limitations of large language models. In Proceedings of the 10th Workshop on Argument Mining, pages 1-10, Singapore. Association for Computational Linguistics.
|
| 274 |
+
Sougata Saha and Rohini Srihari. 2023. ArgU: A controllable factual argument generator. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8373-8388, Toronto, Canada. Association for Computational Linguistics.
|
| 275 |
+
Swarnadeep Saha, Prateek Yadav, Lisa Bauer, and Mohit Bansal. 2021. ExplaGraphs: An explanation graph generation task for structured commonsense reasoning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7716-7740, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 276 |
+
Timo Schick and Hinrich Schütze. 2021. Generating datasets with pretrained language models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6943-6951, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 277 |
+
Benjamin Schiller, Johannes Daxenberger, and Iryna Gurevych. 2021. Aspect-controlled neural argument generation. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 380-396, Online. Association for Computational Linguistics.
|
| 278 |
+
|
| 279 |
+
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
|
| 280 |
+
Zhihong Shao, Yeyun Gong, Yelong Shen, Minlie Huang, Nan Duan, and Weizhu Chen. 2023. Synthetic prompting: generating chain-of-thought demonstrations for large language models. In Proceedings of the 40th International Conference on Machine Learning, ICML'23. JMLR.org.
|
| 281 |
+
Emily Sheng, Kai-Wei Chang, Prem Natarajan, and Nanyun Peng. 2020. Towards Controllable Biases in Language Generation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3239-3254, Online. Association for Computational Linguistics.
|
| 282 |
+
Zhivar Sourati, Filip Ilievski, Hông-Ân Sandlin, and Alain Mermoud. 2023. Case-based reasoning with language models for classification of logical fallacies. arXiv preprint arXiv:2301.11879.
|
| 283 |
+
Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Daniel Cox, Yiming Yang, and Chuang Gan. 2023. Principle-driven self-alignment of language models from scratch with minimal human supervision. In Thirty-seventh Conference on Neural Information Processing Systems.
|
| 284 |
+
Christopher W Tindale. 2007. Fallacies and argument appraisal. Cambridge University Press.
|
| 285 |
+
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothee Lacroix, Baptiste Roziere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
|
| 286 |
+
Henning Wachsmuth, Nona Naderi, Ivan Habernal, Yufang Hou, Graeme Hirst, Iryna Gurevych, and Benno Stein. 2017. Argumentation quality assessment: Theory vs. practice. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 250-255, Vancouver, Canada. Association for Computational Linguistics.
|
| 287 |
+
Douglas Walton, Christopher Reed, and Fabrizio Macagno. 2008. Argumentation schemes. Cambridge University Press.
|
| 288 |
+
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. Self-instruct: Aligning language models with self-generated instructions. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 13484-13508, Toronto, Canada. Association for Computational Linguistics.
|
| 289 |
+
Laura Weidinger, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, et al. 2021. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359.
|
| 290 |
+
|
| 291 |
+
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
|
| 292 |
+
Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, and Young Jin Kim. 2024. Contrastive preference optimization: Pushing the boundaries of LLM performance in machine translation. arXiv preprint arXiv:2401.08417.
|
| 293 |
+
Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593.
|
| 294 |
+
|
| 295 |
+
# A Data Augmentation and Evaluation with LLMs
|
| 296 |
+
|
| 297 |
+
# A.1 Generating Arguments with ChatGPT
|
| 298 |
+
|
| 299 |
+
Our prompt design for ChatGPT follows a heuristic similar to Liu et al. (2023): we introduce the task, define the fallacy type to be generated, and provide two examples of that particular fallacy type. Following the distribution in Fig. 3, we generate four fallacies for the same topic and feed the following prompt to ChatGPT to produce fallacies as negative preference data:
|
| 300 |
+
|
| 301 |
+
```txt
|
| 302 |
+
You are given a topic $T$. Your task is to generate a {supporting or counter} argument in the form of a $f$-type$^a$ logical fallacy in the context of the topic. It should not be longer than 25 words.
|
| 303 |
+
$f$ -type fallacy is defined as: {definition}
|
| 304 |
+
examples of $f$ -type are:
|
| 305 |
+
{example 1}
|
| 306 |
+
{example 2}
|
| 307 |
+
Here is an example of $f$ -type fallacy argument:
|
| 308 |
+
{example of an argumentative fallacy}
|
| 309 |
+
return {
|
| 310 |
+
"topic": $T$ , "fallacy": $f$ -type, "argument": <...>
|
| 311 |
+
}
|
| 312 |
+
```
|
| 313 |
+
|
| 314 |
+
$^{a}$ The fallacy type, which can be any of the thirteen types described in Tab. 9.
|
| 315 |
+
|
| 316 |
+
Some examples of generated logical fallacies include: "I know someone who smoked cannabis and became successful. Therefore, everyone who smokes cannabis will be successful." and "I know a few people who spend too much time on social media and have no real-life friends. Therefore, social media is terrible for society." Tab. 10 presents examples of samples in our preference dataset. Our augmented fallacy argument dataset follows the train-test split in Tab. 2.
|
| 317 |
+
|
| 318 |
+
# A.2 Prompting GPT-4 for Evaluation
|
| 319 |
+
|
| 320 |
+
To evaluate whether $\pi_{\theta}$ for a given method $\theta$ generates more logical arguments than $\pi_{\mathrm{SFT}}$, we use GPT-4 and compute the win-rate (i.e., how often $\pi_{\theta}$ produces the better argument) by prompting GPT-4 with:
|
| 321 |
+
|
| 322 |
+
```txt
|
| 323 |
+
Which of these arguments is better for the topic $t$ and stance $s$ :
|
| 324 |
+
1. $\pi_{\mathrm{SFT}}(y|t,s)$
|
| 325 |
+
2. $\pi_{\theta}(y|t,s)$
|
| 326 |
+
If both arguments are equally good, return 3 (Tie). The better argument is: <response>
|
| 327 |
+
```
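A hedged sketch of wiring this prompt to the API follows; the client usage matches the current openai Python package, while the model name, helper name, and return handling are illustrative rather than taken from the paper:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge_win(topic, stance, arg_sft, arg_theta):
    """Ask GPT-4 which argument is better; returns the raw '1'/'2'/'3' verdict."""
    prompt = (
        f"Which of these arguments is better for the topic {topic} "
        f"and stance {stance}:\n"
        f"1. {arg_sft}\n"
        f"2. {arg_theta}\n"
        "If both arguments are equally good, return 3 (Tie). "
        "The better argument is:"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()
```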
|
| 328 |
+
|
| 329 |
+
We also evaluate how often models produce logical fallacies, which we call fallacy-rate, by prompting ChatGPT with:
|
| 330 |
+
|
| 331 |
+
```txt
|
| 332 |
+
Consider the following topic $t$ , stance $s$ {supporting or counter} and argument $a = \pi_{\theta}(y|t,s)$ :
|
| 333 |
+
Topic: $t$
|
| 334 |
+
Argument: $a$
|
| 335 |
+
Out of all the following logical fallacy types {list of types from Tab. 9}
|
| 336 |
+
would you qualify $a$ as one of these logical fallacies? If not - return "None".
|
| 337 |
+
If yes, which logical fallacy type is it? Let $f$ -type be your answer. Return {"topic": $t$ , "text": $a$ , "fallacy type": $f$ -type}
|
| 338 |
+
```
|
| 339 |
+
|
| 340 |
+
# B Preference Optimization
|
| 341 |
+
|
| 342 |
+
Preference optimization is a crucial step in aligning language models to generate outputs that meet user preferences and objectives. It ensures that the goals and behavior of AI systems align with those of their human users. To demonstrate that models are capable of learning to distinguish logically sound text from logical fallacies, we assess the performance of four different
|
| 343 |
+
|
| 344 |
+
preference optimization techniques, including PPO (Schulman et al., 2017), DPO (Rafailov et al., 2023), CPO (Xu et al., 2024) and KTO (Ethayarajh et al., 2024).
|
| 345 |
+
|
| 346 |
+
PPO. One widely used reinforcement learning optimization algorithm within RLHF is Proximal Policy Optimization (PPO). PPO (Schulman et al., 2017) is particularly favored due to its stability and efficiency. It iteratively updates the model's policy parameters by maximizing the expected cumulative reward while constraining the policy updates to a proximity threshold, preventing large policy changes that could destabilize learning.
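For reference, PPO's clipped surrogate objective, in its standard form from Schulman et al. (2017) rather than reproduced from this paper, is:

$$
\mathcal{L}^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\left[\min\left(r_t(\theta)\hat{A}_t,\ \operatorname{clip}\left(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\right)\hat{A}_t\right)\right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)}
$$

where $\hat{A}_t$ is the advantage estimate and the clip range $\epsilon$ enforces the proximity constraint described above.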
|
| 347 |
+
|
| 348 |
+
DPO. More recently, Rafailov et al. (2023) introduced Direct Preference Optimization, which skips the reward modeling step required by PPO. DPO leverages an analytical mapping from reward functions to optimal policies to transform a loss function over reward functions into a loss function over policies, avoiding the need to fit a reward model while still optimizing under the given preferences.
|
| 349 |
+
|
| 350 |
+
CPO. Another recently introduced method is Contrastive Preference Optimization (CPO). Xu et al. (2024) derived CPO from DPO to address some of its shortcomings, including memory and speed inefficiencies. Xu et al. (2024) focus mainly on machine translation, but the method can also be adapted to regular preference optimization for other tasks. They additionally incorporate a behavior cloning regularizer to ensure that the policy does not deviate from the preferred data distribution.
|
| 351 |
+
|
| 352 |
+
KTO. Finally, the last method we evaluate is Kahneman-Tversky Optimization (KTO) (Ethayarajh et al., 2024). The authors introduce the concept of human-aware loss functions (HALOs), which implicitly model human biases and have been shown to perform better than non-HALOs. Their approach directly maximizes the utility of generations instead of maximizing preference likelihood, as is commonly done.
|
| 353 |
+
|
| 354 |
+
# C Details of Experimental Setup
|
| 355 |
+
|
| 356 |
+
# C.1 Hyperparameters
|
| 357 |
+
|
| 358 |
+
Training Parameters. For both base models, we train $\pi_{\mathrm{SFT}}$ and $\pi_{\theta}$ on 2 A100 GPUs for 3 epochs, using the Adam optimizer with a learning rate of $2 \times 10^{-4}$. For the LoRA configuration, we select a rank $r = 16$, $\alpha = 32$ and a dropout of $5 \times 10^{-2}$. Regarding the specific training details of each method, we use $\beta = 0.25$ for DPO (Rafailov et al., 2023), which controls how much $\pi_{\theta}$ deviates from the reference model $\pi_{\mathrm{SFT}}$. For both CPO (Xu et al., 2024) and KTO (Ethayarajh et al., 2024), we use $\beta = 0.1$, which controls the implicit reward. As the reward model for PPO (Schulman et al., 2017), we use a binary logical fallacy classifier trained with the chosen responses as non-fallacies and the rejected responses as fallacies. The classifier achieves accuracy and F1 over $95\%$ in detecting fallacious arguments, which makes us confident in using it as a reward model for this task; we use its logits as rewards.
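As a concrete illustration, the following is a minimal sketch of this configuration using the peft and trl libraries. It is not the authors' released code: the model name, toy dataset, and output path are placeholders, and trl argument names vary across versions (e.g., `tokenizer=` instead of `processing_class=` in older releases).

```python
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "mistralai/Mistral-7B-v0.1"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Toy stand-in for the preference data: a prompt, the preferred (chosen)
# valid argument, and the dispreferred (rejected) fallacious argument.
train_dataset = Dataset.from_dict({
    "prompt": ["Generate a supporting argument for the topic: Cannabis should be legal."],
    "chosen": ["It's not a bad thing to make marijuana more available."],
    "rejected": ["Why worry about cannabis when there are bigger issues like poverty?"],
})

peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
args = DPOConfig(
    output_dir="dpo-argument-model",  # hypothetical path
    learning_rate=2e-4,
    num_train_epochs=3,
    beta=0.25,  # deviation from the SFT reference model, as in Appendix C.1
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # with a PEFT config, trl derives the reference model internally
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```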
|
| 359 |
+
|
| 360 |
+
Decoding Parameters. At inference time, we generate arguments with the aligned models using the same decoding strategy. We use nucleus-sampling with $p = 0.75$ and top- $k$ sampling with $k = 10$ .
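A hedged decoding sketch with these parameters, reusing `model` and `tokenizer` from the sketch above and the standard Hugging Face `generate` API:

```python
prompt = "Generate a supporting argument for the topic: Cannabis should be legal."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,   # sample instead of greedy decoding
    top_p=0.75,       # nucleus sampling threshold
    top_k=10,         # keep only the 10 most likely tokens at each step
    max_new_tokens=64,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```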
|
| 361 |
+
|
| 362 |
+
Hyperparameter Selection for FIPO. To optimize our custom loss function, defined in Equation 5, we conduct a series of experiments manipulating the hyperparameter $\lambda$ (Equation 5) and the weights for the cross-entropy loss (Equation 4). Our first step is tuning the loss weights. Since our dataset consists of $n$ pairs of preferred arguments and dispreferred fallacious arguments, it is crucial to differentiate between the fallacy classes. It is thus more suitable to set the weight for each fallacy type to its frequency in the dataset, as given by Equation 4, and to keep the weight for the preferred responses as small as possible, which we set to the minimum of all fallacy frequencies (see the sketch below). We also evaluate different settings of $\lambda$, testing values of 0.1, 0.3, and 0.6. Our findings indicate that a higher $\lambda$ effectively reduces the number of fallacies produced by the policies but adversely impacts argument quality (win-rate). Conversely, a $\lambda$ of 0.1 has minimal impact on improving the fallacy-rate. After assessing the trade-offs, we determined that a $\lambda$ value of 0.3 provides the optimal balance between minimizing fallacies and maintaining a reasonable win-rate.
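The weighting scheme itself is simple to compute; a small sketch (the function name and label encoding are illustrative, not from the paper):

```python
from collections import Counter

def fallacy_weights(fallacy_labels):
    """Weights as in Equation 4: each fallacy type k is weighted by its
    empirical frequency in the dataset; the 'Not a Fallacy' class (0) gets
    the minimum of the fallacy frequencies."""
    n = len(fallacy_labels)
    weights = {k: count / n for k, count in Counter(fallacy_labels).items()}
    weights[0] = min(weights.values())  # w_0 = min_k w_k
    return weights

# e.g. fallacy_weights(["faulty_generalization"] * 6 + ["false_causality"] * 4)
# -> {'faulty_generalization': 0.6, 'false_causality': 0.4, 0: 0.4}
```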
|
| 363 |
+
|
| 364 |
+
# C.2 Context for Retrieval Augmented Generation (RAG)
|
| 365 |
+
|
| 366 |
+
We use the wiki-dpr database to add contextual information to prompts. RAG (Lewis et al., 2020) grounds arguments in factually accurate and contextually relevant information by retrieving it from pre-existing knowledge sources. We retrieve documents relevant to the topics and add them to the prompts as context. For example, given the topic Factory farming should not be banned, the knowledge extracted includes "The practice of dairy production in a factory farm environment has been criticized by animal welfare activists. The U.S. Food and Drug Administration states that no 'significant difference' has been found between milk from treated and non-treated cows. [...]"
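One plausible way to implement this retrieval step (not necessarily the authors' exact pipeline) is to score passages with the DPR encoders of Karpukhin et al. (2020); the full setup would search the wiki-dpr FAISS index rather than the illustrative candidate list below:

```python
import torch
from transformers import (DPRContextEncoder, DPRContextEncoderTokenizer,
                          DPRQuestionEncoder, DPRQuestionEncoderTokenizer)

q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
c_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
c_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

topic = "Factory farming should not be banned"
passages = [  # stand-ins for wiki-dpr passages
    "The practice of dairy production in a factory farm environment has been criticized ...",
    "An unrelated passage about astronomy.",
]

with torch.no_grad():
    q_emb = q_enc(**q_tok(topic, return_tensors="pt")).pooler_output
    c_emb = c_enc(**c_tok(passages, return_tensors="pt", padding=True)).pooler_output

scores = (q_emb @ c_emb.T).squeeze(0)     # dot-product relevance, as in DPR
context = passages[int(scores.argmax())]  # best passage, prepended to the prompt
```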
|
| 367 |
+
|
| 368 |
+
# C.3 Licenses of Artifacts
|
| 369 |
+
|
| 370 |
+
The artifacts we employ, including datasets, packages, and models, are detailed in Tab. 11. Our usage of these artifacts is consistent with their intended purposes.
|
| 371 |
+
|
| 372 |
+
# C.4 Detailed Setup of Human Annotations in Amazon Mechanical Turk
|
| 373 |
+
|
| 374 |
+
The collected annotations for the pairwise comparison are handled under university privacy policy, and we use Amazon Mechanical Turk's services. The webpage we use for the comparative study by human annotators is shown in Fig. 8. The workers are paid fairly for their annotation work.
|
| 375 |
+
|
| 376 |
+
# D Additional Experimental Results
|
| 377 |
+
|
| 378 |
+
# D.1 Detailed fallacy-rates in Zero-Shot Setting
|
| 379 |
+
|
| 380 |
+
Tab. 7 presents the distribution of fallacies detected with GPT-4 in the zero-shot setting using Llama-2 and Mistral under scenario $\mathbf{S}_1$, prompting the models without any explicit instruction about logical fallacies (simply providing a topic and stance and asking the model to generate an argument), and scenario $\mathbf{S}_2$, including in the prompt a definition of logical fallacy, two examples of logical fallacies, and a clear instruction not to generate a logical fallacy.
|
| 381 |
+
|
| 382 |
+
<table><tr><td rowspan="2">Fallacy Types</td><td colspan="4">Scenario S1</td><td colspan="4">Scenario S2</td></tr><tr><td>Llama-2</td><td>Mistral</td><td>ChatGPT</td><td>Llama-2-RAG</td><td>Llama-2</td><td>Mistral</td><td>ChatGPT</td><td>Llama-2-RAG</td></tr><tr><td>Faulty Generalization</td><td>13</td><td>10</td><td>9</td><td>12</td><td>9</td><td>8</td><td>6</td><td>8</td></tr><tr><td>False Causality</td><td>3</td><td>4</td><td>4</td><td>5</td><td>4</td><td>3</td><td>3</td><td>3</td></tr><tr><td>Appeal To Emotion</td><td>2</td><td>1</td><td>1</td><td>2</td><td>1</td><td>2</td><td>1</td><td>1</td></tr><tr><td>Fallacy of Relevance</td><td>1</td><td>1</td><td>-</td><td>3</td><td>2</td><td>1</td><td>2</td><td>1</td></tr><tr><td>False Dilemma</td><td>1</td><td>1</td><td>2</td><td>3</td><td>-</td><td>-</td><td>1</td><td>1</td></tr><tr><td>Circular Reasoning</td><td>29</td><td>21</td><td>3</td><td>12</td><td>5</td><td>2</td><td>-</td><td>5</td></tr><tr><td>Fallacy of Logic</td><td>6</td><td>-</td><td>2</td><td>-</td><td>-</td><td>2</td><td>1</td><td>-</td></tr><tr><td>Fallacy-Rate</td><td>55</td><td>38</td><td>21</td><td>37</td><td>21</td><td>18</td><td>14</td><td>19</td></tr><tr><td>Not A Fallacy</td><td>45</td><td>62</td><td>79</td><td>63</td><td>79</td><td>82</td><td>86</td><td>81</td></tr></table>
|
| 383 |
+
|
| 384 |
+
Table 7: Fallacy-rate according to GPT-4 on arguments generated by Llama-2, Mistral, ChatGPT, and a Llama-2-based RAG model under scenario $\mathbf{S}_1$ (left column block) and scenario $\mathbf{S}_2$ (right column block).
|
| 385 |
+
|
| 386 |
+
# D.2 Results Regarding Win-Rate
|
| 387 |
+
|
| 388 |
+
Tab. 8 presents the win-rates computed by GPT-4, comparing each policy with the corresponding SFT model to determine which model wins more often.
|
| 389 |
+
|
| 390 |
+
Another interesting observation from Fig. 5 and Tab. 8 is the tie-rate. Annotating concise arguments to determine superiority is challenging for humans, often leading to the simpler selection of $\text{Tie}$ when both arguments adequately address the topic. In fact, $\text{Tie}$ was chosen more frequently in human evaluations compared to the automatic evaluation by GPT-4.
|
| 391 |
+
|
| 392 |
+
# D.3 Examples of Fallacies Produced during Inference
|
| 393 |
+
|
| 394 |
+
Fig. 7 shows a few examples of arguments and fallacies produced by the models on a set of topics.
|
| 395 |
+
|
| 396 |
+
<table><tr><td>Win-Rate</td><td>πSFT</td><td>πθ</td><td>Tie</td><td>πSFT</td><td>πθ</td><td>Tie</td></tr><tr><td></td><td colspan="3">Llama-2 (7B)</td><td colspan="3">Mistral (7B)</td></tr><tr><td>πSFT vs. πDPO</td><td>35</td><td>61</td><td>4</td><td>49</td><td>48</td><td>3</td></tr><tr><td>πSFT vs. πPPO</td><td>38</td><td>49</td><td>13</td><td>48</td><td>49</td><td>3</td></tr><tr><td>πSFT vs. πCPO</td><td>43.5</td><td>52.5</td><td>4</td><td>46.75</td><td>51.5</td><td>1.75</td></tr><tr><td>πSFT vs. πKTO</td><td>40.5</td><td>46</td><td>13.5</td><td>44.5</td><td>51.5</td><td>4</td></tr><tr><td>πSFT vs. πFIPO</td><td>33</td><td>63.5</td><td>3.5</td><td>27</td><td>68</td><td>5</td></tr></table>
|
| 397 |
+
|
| 398 |
+
Table 8: Win-rate according to GPT-4. $\pi_{\theta}$ is the aligned policy, where $\theta$ designates the respective alignment method (e.g., when comparing $\pi_{\mathbf{SFT}}$ with $\pi_{\mathbf{DPO}}$, the value 61 means $\pi_{\mathbf{DPO}}$ wins 61% of the time).
|
| 399 |
+
|
| 400 |
+
<table><tr><td>Education</td><td>Surrogacy</td><td>Economics</td></tr><tr><td>Acting takes children away from their education and normal activities.
|
| 401 |
+
Actors are rich and famous and should not be denied the opportunity to make more money.
|
| 402 |
+
(Faulty Generalization).
|
| 403 |
+
School uniforms should not be allowed.
|
| 404 |
+
Schools uniform helps students to be more focused on their studies.
|
| 405 |
+
(False Causality).</td><td>Surrogacy is an advantage for people.
|
| 406 |
+
The surplus of babies will cause more crime.
|
| 407 |
+
(Faulty Generalization).
|
| 408 |
+
France, German, Italy, and Spain all believe surrogacy is exploitation, and have it prohibited.
|
| 409 |
+
Surrogates are not exploited because they are paid.
|
| 410 |
+
(Faulty Generalization).</td><td>It is important to have subsidized student loans, so that all students can go to college.
|
| 411 |
+
Student loan debt is a problem for many people.
|
| 412 |
+
It should be stopped.
|
| 413 |
+
(False Dilemma).
|
| 414 |
+
Subsidizing journalism allows for efficient information.
|
| 415 |
+
By subsidizing journalists, they are being forced to tell us what we want to hear.
|
| 416 |
+
(False Causality).</td></tr><tr><td>Religion</td><td>Wages</td><td>Law</td></tr><tr><td>Kids should not be exposed to prayer from other religions.
|
| 417 |
+
Religion is a personal choice. Prayer in school should be limited to the religion of the majority.
|
| 418 |
+
(Ad Populum).
|
| 419 |
+
People should follow whichever religion they choose.
|
| 420 |
+
Religion is not a good thing. It is divisive.
|
| 421 |
+
(Faulty Generalization).</td><td>Executives are hard working.
|
| 422 |
+
Executive has access to money which makes them not hardworking.
|
| 423 |
+
(False Causality).
|
| 424 |
+
Payday loans are risky.
|
| 425 |
+
Pay day loans are not good for people.
|
| 426 |
+
(Faulty Generalization).</td><td>It's impossible to abolish capital punishment.
|
| 427 |
+
Capital punishment is the only way to ensure that people who commit crimes are punished.
|
| 428 |
+
(False Dilemma).
|
| 429 |
+
People should be able to choose their lawyer.
|
| 430 |
+
People don't know the difference between a good lawyer and a bad one.
|
| 431 |
+
(Faulty Generalization).</td></tr></table>
|
| 432 |
+
|
| 433 |
+
Figure 7: Examples of fallacious arguments generated at inference time by different models.
|
| 434 |
+
|
| 435 |
+
# E Using Humans to Classify Arguments
|
| 436 |
+
|
| 437 |
+
We use annotators to compute agreement scores with GPT-4's predictions, in order to justify the use of GPT-4 and establish its reliability. A prior annotation task asked annotators to classify arguments into one of 13 fallacy types or as Not a Fallacy. Out of 200 samples, each annotated by three different workers, a majority agreement was reached in only $47\%$ of cases. This is likely due to the inherent difficulty of accurately identifying logical fallacies, and it leads to a loss of information whenever no agreement is reached. We therefore believe the easiest way to involve humans in computing the fallacy-rate is to give them a binary decision instead.
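For reporting chance-corrected agreement on such a task, Randolph's free-marginal multirater kappa (cited above) is a natural choice; a minimal sketch, where the function and its inputs are illustrative rather than the paper's evaluation code:

```python
def free_marginal_kappa(observed_agreement, n_categories):
    """Randolph (2005): kappa_free = (P_o - 1/q) / (1 - 1/q), where P_o is
    the overall observed agreement and q the number of categories
    (here 14: thirteen fallacy types plus 'Not a Fallacy')."""
    chance = 1.0 / n_categories
    return (observed_agreement - chance) / (1.0 - chance)

# Illustrative input only: free_marginal_kappa(0.47, 14) is roughly 0.43.
```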
|
| 438 |
+
|
| 439 |
+
# F Fallacy-Informed Preference Optimization: Loss Definition
|
| 440 |
+
|
| 441 |
+
In our work, FIPO combines the CPO loss (Xu et al., 2024) for preference optimization with our classification loss. We selected CPO because it achieves the optimal balance between win-rate and fallacy-rate (Fig. 5, Tab. 5). Moreover, the CPO approach is designed to make the model learn rejections more accurately, which fits our study.
|
| 442 |
+
|
| 443 |
+
CPO Loss. CPO is a reference-free preference optimization method. It extends DPO (Rafailov et al., 2023), modifying the DPO loss by using a uniform model instead of a reference model. The DPO loss is
|
| 444 |
+
|
| 445 |
+
defined as:
|
| 446 |
+
|
| 447 |
+
$$
|
| 448 |
+
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta, \pi_{\mathrm{ref}}) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\left[\log \sigma\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
|
| 449 |
+
$$
|
| 450 |
+
|
| 451 |
+
The CPO loss first approximates the DPO loss using a uniform reference model:
|
| 452 |
+
|
| 453 |
+
$$
|
| 454 |
+
\mathcal{L}(\pi_\theta, U) = -\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}}\Big[\log \sigma\Big(\beta \log \pi_\theta(y_w \mid x) - \beta \log \pi_\theta(y_l \mid x)\Big)\Big]
|
| 455 |
+
$$
|
| 456 |
+
|
| 457 |
+
and defines the CPO loss as $\mathcal{L}_{\mathrm{CPO}}(\pi_{\theta}) = \min_{\theta}\left(\mathcal{L}(\pi_{\theta},U) - \mathbb{E}_{(x,y_w)\sim \mathcal{D}}[\log \pi_{\theta}(y_w|x)]\right)$, which combines the preference loss with a negative log-likelihood (NLL) term. The NLL term ensures that $\pi_{\theta}$ does not deviate from the preferred data distribution.
|
| 458 |
+
|
| 459 |
+
FIPO Loss. Recall our preference dataset $\mathcal{D} = \{t^{(i)}, s^{(i)}, y_w^{(i)}, y_l^{(i)}, k^{(i)}\}$, with $t$ the topic, $s$ the stance, $y_w$ the preferred valid argument, $y_l$ the dispreferred fallacious argument, and $k$ the fallacy type of $y_l$.
|
| 460 |
+
|
| 461 |
+
Now recall the definitions of our classification loss in Section 4.4. We define the probability $p_k$ of each fallacy type $k$ as the output of a linear layer applied to the last hidden state of the model $\pi_{\theta}$, for both the preferred and dispreferred samples. That is, we perform a forward pass with $y_w$ and $y_l$ and obtain the last hidden states at the last tokens: $\mathbf{h}_{\theta}(y_{\{w,l\}}|t,s) = \pi_{\theta}^{L,T}(y_{\{w,l\}}|t,s)$, where $L$ and $T$ are the number of layers in the language model and the position of the last token, respectively. From the hidden states, we compute the probabilities using:
|
| 462 |
+
|
| 463 |
+
$$
|
| 464 |
+
\mathbb{P}_{\mathbf{h}_\theta}^{k}\left(y_{\{w,l\}} \mid t, s\right) = \operatorname{Softmax}\left(\mathbf{W}\mathbf{h}_\theta\left(y_{\{w,l\}} \mid t, s\right) + \mathbf{b}\right)_k; \quad \text{with } \operatorname{Softmax}(z_i) = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}} \tag{7}
|
| 465 |
+
$$
|
| 466 |
+
|
| 467 |
+
where $W$ is the linear layer's weight matrix, $b$ is the corresponding bias term. This way, we can compute the log probabilities for the preferred and dispreferred samples $\psi_0(y_w|t,s) = \log \mathbb{P}_{\mathbf{h}_\theta}^0 (y_w|t,s)$ and $\psi_{k}(y_{l}|t,s) = \log \mathbb{P}_{\mathbf{h}_{\theta}}^{k}(y_{l}|t,s)$ respectively. With these log probabilities, we can compute a cross-entropy loss for both samples:
|
| 468 |
+
|
| 469 |
+
$$
|
| 470 |
+
\left\{ \begin{array}{l l} \mathcal {L} _ {w} & = - \mathbb {E} _ {(t, s, y _ {w}, y _ {l}, k)} [ \psi_ {0} (y _ {w} | t, s) ] \\ \mathcal {L} _ {l} & = - \mathbb {E} _ {(t, s, y _ {w}, y _ {l}, k)} [ \psi_ {k} (y _ {l} | t, s) ] \end{array} \right. \tag {8}
|
| 471 |
+
$$
|
| 472 |
+
|
| 473 |
+
Additionally, we use weights in our classification loss, defined as the frequencies of the fallacy types in our dataset, to guide the model towards generating logical arguments by penalizing fallacy errors more heavily:
|
| 474 |
+
|
| 475 |
+
$$
|
| 476 |
+
w_k = \frac{1}{|\mathcal{D}|} \sum_{i=1}^{|\mathcal{D}|} \mathbb{1}\left\{k^{(i)} = k\right\} \quad \text{for fallacy types } k; \qquad w_0 = \min_k w_k \quad \text{for the class 'Not a Fallacy' (class 0)}
|
| 477 |
+
$$
|
| 478 |
+
|
| 479 |
+
Our method introduces a weighted cross-entropy loss which we use in addition to the preference loss:
|
| 480 |
+
|
| 481 |
+
$$
|
| 482 |
+
\begin{aligned} \mathcal{L}_{\mathrm{CLF}} &= w_0 \mathcal{L}_w + w_k \mathcal{L}_l && (9) \\ &= -w_0\, \mathbb{E}_{(t,s,y_w,y_l,k) \sim \mathcal{D}}\left[\psi_0\left(y_w \mid t, s\right)\right] - w_k\, \mathbb{E}_{(t,s,y_w,y_l,k) \sim \mathcal{D}}\left[\psi_k\left(y_l \mid t, s\right)\right] && (10) \\ &= -\mathbb{E}_{(t,s,y_w,y_l,k) \sim \mathcal{D}}\left[w_0 \log \mathbb{P}_{\mathbf{h}_\theta}^{0}(y_w \mid t, s) + w_k \log \mathbb{P}_{\mathbf{h}_\theta}^{k}(y_l \mid t, s)\right] && (11) \end{aligned}
|
| 483 |
+
$$
|
| 484 |
+
|
| 485 |
+
FIPO is thus reference-free, and the loss $\mathcal{L}_{\mathrm{FIPO}}$ is defined as:
|
| 486 |
+
|
| 487 |
+
$$
|
| 488 |
+
\begin{aligned} \mathcal{L}_{\mathrm{FIPO}} &= \mathcal{L}_{\mathrm{CPO}} + \lambda \mathcal{L}_{\mathrm{CLF}} \\ &= -\mathbb{E}_{(t,s,y_w,y_l,k) \sim \mathcal{D}}\Bigg[\underbrace{\min_{\theta} \log \sigma\left(\beta \log \pi_\theta(y_w \mid t, s) - \beta \log \pi_\theta(y_l \mid t, s)\right) + \log \pi_\theta(y_w \mid t, s)}_{\mathcal{L}_{\mathrm{CPO}}\ \text{term}} \\ &\qquad + \lambda \underbrace{\left(w_0 \log \mathbb{P}_{\mathbf{h}_\theta}^{0}\left(y_w \mid t, s\right) + w_k \log \mathbb{P}_{\mathbf{h}_\theta}^{k}\left(y_l \mid t, s\right)\right)}_{\mathcal{L}_{\mathrm{CLF}}\ \text{term}}\Bigg] \end{aligned}
|
| 489 |
+
$$
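To make the combined objective concrete, here is an illustrative PyTorch sketch of a per-batch FIPO loss under the definitions above. It is not the authors' released code; the tensor layout (sequence log-probabilities and classifier-head logits precomputed per example) is an assumption for the sake of a self-contained example.

```python
import torch
import torch.nn.functional as F

def fipo_loss(logp_w, logp_l, clf_logits_w, clf_logits_l, k, w, beta=0.1, lam=0.3):
    """Sketch of L_FIPO = L_CPO + lambda * L_CLF.
    Shapes: logp_w, logp_l: (B,) sequence log-probs of preferred / dispreferred
    arguments under pi_theta; clf_logits_*: (B, K+1) logits of the linear head
    (class 0 = 'Not a Fallacy'); k: (B,) long tensor of fallacy types of y_l;
    w: (K+1,) class-weight tensor with w[0] = w_0."""
    # Preference term: -log sigmoid(beta * (log pi(y_w) - log pi(y_l)))
    pref = -F.logsigmoid(beta * (logp_w - logp_l)).mean()
    # Behavior-cloning regularizer: NLL of the preferred argument
    nll = -logp_w.mean()
    # Weighted classification term (Equation 11): class 0 for y_w, class k for y_l
    log_p_w = F.log_softmax(clf_logits_w, dim=-1)
    log_p_l = F.log_softmax(clf_logits_l, dim=-1)
    batch = torch.arange(k.shape[0])
    clf = -(w[0] * log_p_w[:, 0] + w[k] * log_p_l[batch, k]).mean()
    return pref + nll + lam * clf
```

With `lam=0.3` this matches the trade-off selected in Appendix C.1, and `w` can be built from the frequency scheme sketched there.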
|
| 490 |
+
|
| 491 |
+
<table><tr><td>Term</td><td>Alternate Term (if any)</td><td>Definition</td></tr><tr><td>Faulty Generalization</td><td>N/A</td><td>When a conclusion is drawn about all or many instances of a phenomenon; an example of jumping to conclusions.</td></tr><tr><td>False Causality</td><td>N/A</td><td>A statement that jumps to a conclusion implying a causal relationship without supporting evidence.</td></tr><tr><td>Circular Reasoning</td><td>N/A</td><td>When the end of an argument comes back to the beginning without having proven itself.</td></tr><tr><td>Ad Populum</td><td>Appeal to Popularity</td><td>An argument based on affirming that something is real or better because the majority thinks so.</td></tr><tr><td>Ad Hominem</td><td>Personal Attack</td><td>Irrelevantly attacking the person, or some aspect of the person, making the argument.</td></tr><tr><td>Fallacy of Logic</td><td>Logical Error</td><td>An error in the logical structure of an argument.</td></tr><tr><td>Appeal to Emotion</td><td>N/A</td><td>Manipulation of the recipient's emotions in order to win an argument.</td></tr><tr><td>False Dilemma</td><td>Excluding Viable Alternatives</td><td>Presenting only two options or sides when there are many options or sides.</td></tr><tr><td>Equivocation</td><td>Use of Ambiguous Language</td><td>When a key term or phrase in an argument is used in an ambiguous way.</td></tr><tr><td>Fallacy of Extension</td><td>Exaggerating</td><td>Attacking an exaggerated or caricatured version of your opponent's position.</td></tr><tr><td>Fallacy of Relevance</td><td>Irrelevant Argument</td><td>Introducing premises or conclusions that have nothing to do with the subject matter.</td></tr><tr><td>Fallacy of Credibility</td><td>Attacking the Character of the Speaker</td><td>Attempting to disprove an argument by attacking the character of the speaker.</td></tr><tr><td>Intentional</td><td>Intentionally Wrong Argument</td><td>Some intentional (sometimes subconscious) action or choice to incorrectly support an argument.</td></tr></table>
|
| 492 |
+
|
| 493 |
+
Table 9: Definitions of different types of logical fallacies.
|
| 494 |
+
|
| 495 |
+
<table><tr><td>Prompt</td><td>Golden</td><td>Fallacy</td></tr><tr><td>Generate a supporting argument for the topic: Cannabis should be legal.</td><td>It's not a bad thing to make marijuana more available.</td><td>Why should we be worrying about legal-izing cannabis when there are more important issues like poverty and hunger? (Fallacy of Relevance)</td></tr><tr><td>Generate a supporting argument for the topic: Urbanization is terrible for the planet.</td><td>Urbanization increases pollution.</td><td>Either we continue urbanization and destroy the planet, or we stop urbaniza-tion and hinder economic growth. (False Dilemma)</td></tr><tr><td>Generate a supporting argument for the topic: Research on embryonic stem cell should not be tax subsidized because for many it goes against their religious beliefs.</td><td>There are Christians who disagree with doing research on embryonic stem cells.</td><td>Those who support tax subsidies for em-bryonic stem cell research are godless and immoral. (Ad Hominem)</td></tr></table>
|
| 496 |
+
|
| 497 |
+
Table 10: Examples of samples from the preference dataset used for preference optimization. Golden arguments are retrieved from previous work (Saha et al., 2021), while fallacy arguments are generated using ChatGPT.
|
| 498 |
+
|
| 499 |
+
<table><tr><td>Artifacts</td><td>Citation</td><td>Link</td><td>License</td></tr><tr><td>LOGIC</td><td>(Jin et al., 2022)</td><td>https://github.com/causalNLP/logical-fallacy</td><td>N/A</td></tr><tr><td>EXPLAGRAPHS</td><td>(Saha et al., 2021)</td><td>https://explagraphs.github.io/</td><td>CC BY-SA 4.0</td></tr><tr><td>Debatepedia</td><td>(Cabrio and Villata, 2012)</td><td>N/A</td><td>N/A</td></tr><tr><td>PyTorch</td><td>(Paszke et al., 2019)</td><td>https://pytorch.org/</td><td>BSD-3 License</td></tr><tr><td>transformers</td><td>(Wolf et al., 2020)</td><td>https://huggingface.co/docs/transformers/index</td><td>Apache License 2.0</td></tr><tr><td>wandb</td><td>N/A</td><td>https://pypi.org/project/wandb/</td><td>MIT License</td></tr><tr><td>nltk</td><td>(Bird and Loper, 2004)</td><td>https://www.nltk.org/</td><td>Apache License 2.0</td></tr><tr><td>OpenAI API</td><td>N/A</td><td>https://platform.openai.com/docs/api-reference</td><td>MIT License</td></tr><tr><td>scikit-learn</td><td>(Pedregosa et al., 2011)</td><td>https://scikit-learn.org/stable/</td><td>BSD License</td></tr></table>
|
| 500 |
+
|
| 501 |
+
Table 11: Licenses of the artifacts utilized in this work, including datasets and major software packages, along with their respective licenses.
|
| 502 |
+
|
| 503 |
+
<table><tr><td colspan="3">Acceptance and Privacy Policies (click to expand/collapse)</td></tr><tr><td colspan="3">Instructions (click to expand/collapse)</td></tr><tr><td colspan="3">(WARNING: This HIT may contain adult content. Worker discretion is advised.) Thanks for participating in this HIT!</td></tr><tr><td colspan="3">Given a topic, a stance and a set of 2 arguments, you are asked to evaluate the quality and relevance of these arguments by selecting 1 of 4 choices: Both are equally good, Argument #1 is better, Argument #2 is better, or Both are equally bad. Your task is to choose which, out of these different arguments, you consider to be more relevant, with respect to a given topic and a stance, taking three criteria into account: 1. Your preferred argument properly addresses the matter in the topic. 2. Your preferred argument corresponds to the given stance. 3. Two arguments are equally good if both properly address the topic and both are logical and relevant; two arguments are equally bad if both are irrelevant or illogical. If you think all the arguments are equally good, or equally bad, you may select Both are equally good or Both are equally bad. An example would be: Topic: There should be no age restrictions on cosmetic surgery. Stance: Supporting. Argument 1: The American Medical Association recommends age restrictions on cosmetic procedures be lifted. Argument 2: Cosmetic surgery can help with self-esteem. With this example, we can better define all elements involved: 1. A Topic refers to the subject or issue that is being discussed; it is the focal point around which the argument revolves. 2. The stance refers to a position or perspective that an individual takes regarding a particular topic; it represents their viewpoint, beliefs, or opinions on the subject matter being discussed. In our case, the stance will be either supporting or counter. 3. The argument is a reasoned or logical presentation of ideas or opinions aimed at persuading others or establishing a particular viewpoint as valid or true; it typically involves the exchange of assertions, evidence, and reasoning to support a claim or position.</td></tr><tr><td colspan="3">Dos and Don'ts (click to expand/collapse)</td></tr><tr><td colspan="3">Examples (click to expand/collapse)</td></tr><tr><td colspan="3">Recall that: You must select one of four choices. If both arguments properly address the topic and both are logical and relevant, pick Both are equally good. If the first argument is better or more relevant, pick Argument #1 is better. If the second is better or more relevant, pick Argument #2 is better. If both arguments are irrelevant or illogical, pick Both are equally bad. 1) Topic: $(topic) Stance: $(stance) Argument #1: $(argument1) Argument #2: $(argument2) Which argument is most relevant with respect to the topic? Both are equally good / Argument #1 is better / Argument #2 is better / Both are equally bad</td></tr><tr><td colspan="3">(Optional) Please let us know if anything was unclear, if you experienced any issues, or if you have any other feedback for us.</td></tr></table>
|
| 504 |
+
|
| 505 |
+
Figure 8: User interface used for human annotation on Amazon Mechanical Turk, where annotators compare pairs of arguments generated by different models to determine the superior argument.
|
2025/A Logical Fallacy-Informed Framework for Argument Generation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:2d8353baaf9a9885754477c349ec3b6e1ec4e606fc39b815fff652b626247654
|
| 3 |
+
size 1196401
|
2025/A Logical Fallacy-Informed Framework for Argument Generation/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/a6e57a98-a206-4f15-a537-4cd8fddecece_content_list.json
ADDED
|
@@ -0,0 +1,1668 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
129,
|
| 8 |
+
83,
|
| 9 |
+
865,
|
| 10 |
+
122
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Shengxiang Gao $^{1,2}$ , Fang Nan $^{1,2}$ , Yongbing Zhang $^{1,2}$ , Yuxin Huang $^{1,2}$ , Kaiwen Tan $^{1,2*}$ , Zhengtao Yu $^{1,2}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
275,
|
| 19 |
+
131,
|
| 20 |
+
727,
|
| 21 |
+
168
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China $^{2}$ Yunnan Key Laboratory of Artificial Intelligence, Kunming, China {kwtan}@kust.edu.cn",
|
| 28 |
+
"bbox": [
|
| 29 |
+
223,
|
| 30 |
+
171,
|
| 31 |
+
773,
|
| 32 |
+
239
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Abstract",
|
| 39 |
+
"text_level": 1,
|
| 40 |
+
"bbox": [
|
| 41 |
+
260,
|
| 42 |
+
260,
|
| 43 |
+
339,
|
| 44 |
+
275
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "Existing research on news summarization primarily focuses on single-language single-document (SLSD), single-language multi-document (SLMD) or cross-language single-document (CLSD). However, in real-world scenarios, news about an international event often involves multiple documents in different languages, i.e., mixed-language multi-document (MLMD). Therefore, summarizing MLMD news is of great significance. However, the lack of datasets for MLMD news summarization has constrained the development of research in this area. To fill this gap, we construct a mixed-language multi-document news summarization dataset (MLMD-news), which contains four different languages and 10,992 source document cluster and target summary pairs. Additionally, we propose a graph-based extract-generate model and benchmark various methods on the MLMD-news dataset and publicly release our dataset and code<sup>1</sup>, aiming to advance research in summarization within MLMD scenarios.",
|
| 51 |
+
"bbox": [
|
| 52 |
+
141,
|
| 53 |
+
290,
|
| 54 |
+
460,
|
| 55 |
+
602
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "1 Introduction",
|
| 62 |
+
"text_level": 1,
|
| 63 |
+
"bbox": [
|
| 64 |
+
114,
|
| 65 |
+
617,
|
| 66 |
+
258,
|
| 67 |
+
632
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "text",
|
| 73 |
+
"text": "The news summarization task aims to simplify and condense a large volume of news content through automated methods, extracting key information and main viewpoints so that readers can quickly grasp the core content of the news. Existing research on news summarization primarily focuses on single-language single-document (SLSD)(Svore et al., 2007; Litvak and Last, 2008; Liu and Lapata, 2019), single-language multi-document (SLMD)(Haghighi and Vanderwende, 2009; Yasunaga et al., 2017; Wang et al., 2009) and cross-language single-document (CLSD) (Wan et al., 2010; Wan, 2011; Wan et al., 2019). However, in reality, many news articles, especially international news, appear in the form of mixed",
|
| 74 |
+
"bbox": [
|
| 75 |
+
112,
|
| 76 |
+
643,
|
| 77 |
+
489,
|
| 78 |
+
885
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "image",
|
| 84 |
+
"img_path": "images/e2ba99158d6e8f4bf1b334a1f4e3fb01f2a9a4018d33c953b352919d9fe850de.jpg",
|
| 85 |
+
"image_caption": [
|
| 86 |
+
"Figure 1: The diagram of SLSD, SLMD, CLSD and MLMD. Each rounded rectangle represents a source document, while the pointed rectangle represents the target summary. \"En\", \"De\", \"Fr\" and \"Es\" indicate that the text is in English, German, French, and Spanish, respectively."
|
| 87 |
+
],
|
| 88 |
+
"image_footnote": [],
|
| 89 |
+
"bbox": [
|
| 90 |
+
512,
|
| 91 |
+
258,
|
| 92 |
+
692,
|
| 93 |
+
423
|
| 94 |
+
],
|
| 95 |
+
"page_idx": 0
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"type": "image",
|
| 99 |
+
"img_path": "images/375b3535aa90ea2ac617b66ae0a207c179564fd76ec7d84766ce8b3953b1e5ff.jpg",
|
| 100 |
+
"image_caption": [],
|
| 101 |
+
"image_footnote": [],
|
| 102 |
+
"bbox": [
|
| 103 |
+
699,
|
| 104 |
+
258,
|
| 105 |
+
878,
|
| 106 |
+
422
|
| 107 |
+
],
|
| 108 |
+
"page_idx": 0
|
| 109 |
+
},
|
| 110 |
+
{
|
| 111 |
+
"type": "text",
|
| 112 |
+
"text": "language multi-document (MLMD). Figure 1 illustrates the four tasks: SLSD, SLMD, CLSD, and MLMD.",
|
| 113 |
+
"bbox": [
|
| 114 |
+
507,
|
| 115 |
+
549,
|
| 116 |
+
882,
|
| 117 |
+
595
|
| 118 |
+
],
|
| 119 |
+
"page_idx": 0
|
| 120 |
+
},
|
| 121 |
+
{
|
| 122 |
+
"type": "text",
|
| 123 |
+
"text": "It is noteworthy that, with the advancement of multi-language models such as mBART (Tang et al., 2020) and GPT (Floridi and Chiriatti, 2020; Achiam et al., 2023), a task referred to as multi-language multi-document news summarization has recently emerged (Giannakopoulos, 2013; Zopf, 2018; Mascarell et al., 2023). In the task, although the languages of different source document clusters vary, each individual source document cluster consists of multiple documents in the same language. Therefore, in each instance of summary generation for this type of task, it essentially falls under the category SLMD. In contrast, in MLMD, each individual source document cluster is composed of multiple documents in different languages. From this perspective, MLMD is more challenging than multi-language multi-document. The latter requires the model to have the capability to understand multiple documents in the current language during a single summary generation. In contrast,",
|
| 124 |
+
"bbox": [
|
| 125 |
+
507,
|
| 126 |
+
599,
|
| 127 |
+
884,
|
| 128 |
+
921
|
| 129 |
+
],
|
| 130 |
+
"page_idx": 0
|
| 131 |
+
},
|
| 132 |
+
{
|
| 133 |
+
"type": "page_footnote",
|
| 134 |
+
"text": "\\*Corresponding author. 1https://github.com/Southnf9/MLMD-news",
|
| 135 |
+
"bbox": [
|
| 136 |
+
134,
|
| 137 |
+
892,
|
| 138 |
+
426,
|
| 139 |
+
920
|
| 140 |
+
],
|
| 141 |
+
"page_idx": 0
|
| 142 |
+
},
|
| 143 |
+
{
"type": "page_number",
"text": "9255",
"bbox": [478, 927, 519, 940],
"page_idx": 0
},
{
"type": "footer",
"text": "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies",
"bbox": [16, 945, 976, 958],
"page_idx": 0
},
{
"type": "footer",
"text": "(Volume 1: Long Papers), pages 9255-9265",
"bbox": [364, 958, 633, 971],
"page_idx": 0
},
{
"type": "footer",
"text": "April 29 - May 4, 2025 ©2025 Association for Computational Linguistics",
"bbox": [273, 972, 722, 985],
"page_idx": 0
},
{
"type": "text",
"text": "MLMD requires the model to simultaneously possess the ability to understand multiple languages and multiple documents within a single summary generation.",
"bbox": [112, 84, 487, 148],
"page_idx": 1
},
{
"type": "text",
"text": "However, the lack of MLMD news datasets has hindered progress in this field. Therefore, we first construct an MLMD-news dataset. This dataset includes documents in four languages: English, German, French, and Spanish, with a total of 10,992 source document clusters and corresponding target summaries. Each source document cluster is composed of multiple documents in different languages, and the corresponding target summary is in English. Additionally, we propose a graph-based extract-generate model for the MLMD task. This model first uses an extractor based on graph neural networks to extract key sentences from a source document cluster, and then employs a generator based on pre-trained models to generate the target summary based on these key sentences. Finally, we benchmark various methods on the MLMD-news dataset and publicly release our dataset and code to advance research in summarization within MLMD scenarios. The contributions of this paper are summarized as follows:",
"bbox": [115, 149, 489, 485],
"page_idx": 1
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- We construct the first mixed-language multi-document (MLMD) dataset, where each source document cluster contains multiple news documents in different languages.",
"- We propose a graph-based extract-generate model as a benchmark for MLMD.",
"- We perform benchmark experiments on the MLMD-news dataset using various methods and publicly release the dataset and code to advance research in this field."
],
"bbox": [136, 495, 485, 673],
"page_idx": 1
},
{
"type": "text",
"text": "2 Related Work",
"text_level": 1,
"bbox": [114, 686, 270, 702],
"page_idx": 1
},
{
"type": "text",
"text": "The related work in news summarization research primarily focuses on three areas, as detailed below:",
"bbox": [112, 712, 487, 743],
"page_idx": 1
},
{
"type": "text",
"text": "Single-Language Single-Document Summarization (SLSD): As shown in Figure 1(a), the SLSD news summarization task takes a source document as input and outputs a target summary in the same language. Existing methods are mainly divided into two categories: extractive and abstractive. Extractive summarization constructs the target summary by directly selecting key sentences or paragraphs from the source document, such as TextRank (Mihalcea and Tarau, 2004) and DeepSumm (Joshi et al., 2023). Abstractive summarization, on",
"bbox": [112, 745, 489, 920],
"page_idx": 1
},
{
"type": "text",
"text": "the other hand, involves first understanding the content of the source document and then generating new summary sentences for the target summary, such as BERTSUM (Liu and Lapata, 2019) and COGITOERGOSUMM (Frisoni et al., 2023).",
"bbox": [507, 84, 882, 164],
"page_idx": 1
},
{
"type": "text",
"text": "Cross-Language Single-Document Summarization (CLSD): As shown in Figure 1(b), the CLSD news summarization task takes a source document as input and produces a target summary in a different language. Existing research is primarily divided into pipeline-based and end-to-end methods. Traditional CLSD approaches typically use pipeline-based methods (Boudin et al., 2011; Linhares Pontes et al., 2018), where the source document is first translated and then summarized, or the summary is generated first and then translated into the target language. In recent years, researchers have increasingly focused on end-to-end CLSD methods (Le, 2024; Cai and Yuan, 2024), which can directly generate summaries in the target language, significantly reducing the risk of error propagation.",
"bbox": [507, 165, 884, 437],
"page_idx": 1
},
{
"type": "text",
"text": "Single-Language Multi-Document Summarization (SLMD): As shown in Figure 1(c), the SLMD news summarization task takes a source document cluster as input, which contains multiple documents, and the output is a target summary in the same language. Existing methods can be categorized into extractive, abstractive, and hybrid. In the early days, due to the small sample size of SLMD datasets like DUC 2004 (Over and Yen, 2004), research on multi-document summarization primarily relied on extractive methods (Mei and Chen, 2012; Wan et al., 2015). In recent years, the emergence of large-scale SLMD datasets, such as Multi-News (Fabbri et al., 2019), has accelerated the development of abstractive (Jin and Wan, 2020; Liu et al., 2021) and hybrid SLMD news summarization (Celikyilmaz and Hakkani-Tur, 2010; Song et al., 2022; Ghadimi and Beigy, 2022).",
"bbox": [507, 438, 882, 727],
"page_idx": 1
},
{
"type": "text",
"text": "Recently, with the development of multi-language models such as mBART (Tang et al., 2020) and GPT (Floridi and Chiriatti, 2020; Achiam et al., 2023), a task known as multi-language multi-document news summarization (Giannakopoulos, 2013; Zopf, 2018; Mascarell et al., 2023) has emerged within the SLMD paradigm. This task involves inputs and outputs similar to those in SLMD, where a source document cluster is input and a target summary in the same language is produced. The difference lies in that the languages of different source document clusters can",
"bbox": [507, 728, 882, 920],
"page_idx": 1
},
{
"type": "page_number",
"text": "9256",
"bbox": [480, 928, 519, 940],
"page_idx": 1
},
{
"type": "image",
"img_path": "images/c21237543a3df4971976db4da8b47435fb4fe6f38e1c299132cb669eb7bfe100.jpg",
"image_caption": [
"Figure 2: The diagram illustrates the construction process of the MLMD-news dataset. First, a round-trip translation (RTT) strategy is employed to translate each news document in the source document clusters of the Multi-News dataset into multiple languages and then back into the original language. This process allows the calculation of the ROUGE-1 score matrix for the document cluster. Based on this score matrix, a greedy selection strategy is used to assign a corresponding language to each news document. The original content of the news document is then replaced with the translated content in the assigned language, resulting in a source document cluster with mixed languages."
],
"image_footnote": [],
"bbox": [115, 84, 485, 280],
"page_idx": 2
},
{
"type": "text",
"text": "vary, thereby further requiring the model to have multilingual understanding capabilities.",
"bbox": [112, 521, 487, 555],
"page_idx": 2
},
{
"type": "text",
"text": "3 The MLMD-news dataset",
"text_level": 1,
"bbox": [114, 599, 369, 614],
"page_idx": 2
},
{
"type": "text",
"text": "The overall process of constructing the MLMD-news dataset is illustrated in Figure 2. The MLMD-news dataset is built upon the Multi-News dataset, which is a well-known and widely used English multi-document summarization dataset. The construction process employs a round-trip translation strategy and a greedy selection strategy to approximate real-world cases. Similar methods, like the NCLS dataset (Zhu et al., 2019) in cross-language summarization, have been widely accepted and used. The main goal of the round-trip translation strategy is to calculate a ROUGE-1 score matrix that reflects translation quality, while the greedy selection strategy is used to assign the most suitable language to each news document in the source document cluster and make the necessary replacements.",
"bbox": [112, 646, 489, 919],
"page_idx": 2
},
{
"type": "text",
"text": "3.1 Round-trip Translation Strategy",
"text_level": 1,
"bbox": [507, 84, 810, 101],
"page_idx": 2
},
{
"type": "text",
"text": "The round-trip translation strategy first uses machine translation services$^{2}$ to translate text from the original language into another language (forward translation) and then uses machine translation services again to translate the text back from the other language into the original language (back translation). This strategy has been utilized by Zhu et al. (2019) to construct cross-language single-document summarization datasets from single-language single-document summarization datasets.",
"bbox": [507, 105, 884, 266],
"page_idx": 2
},
{
"type": "text",
"text": "Therefore, we use the round-trip translation strategy to construct the MLMD-news dataset. First, the original English news documents from the Multi-News dataset are translated into Spanish, French, and German through forward translation. Then, these translated documents are back-translated into English. The English documents obtained from the back translation of each language are compared with the original English news documents, and ROUGE-1 scores are calculated. If the ROUGE-1 score for a particular language is below a threshold, it is set to zero. Conversely, if the ROUGE-1 score is equal to or above the threshold, the score is retained, resulting in a ROUGE-1 score matrix (where each row represents a document and each column corresponds to a language).",
"bbox": [507, 267, 884, 525],
"page_idx": 2
},
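The round-trip scoring step described above reduces to a small loop over documents and candidate languages. Below is a minimal Python sketch of that step; the `translate` helper is a hypothetical stand-in for the machine translation service, and the thresholds reuse the values reported later in Section 5.2.

```python
# Sketch of the round-trip translation (RTT) scoring step; `translate` is a
# hypothetical MT client, and rouge-score is assumed for ROUGE-1 computation.
from rouge_score import rouge_scorer

def translate(text: str, src: str, tgt: str) -> str:
    """Placeholder for a machine translation service call."""
    raise NotImplementedError

LANGS = ["fr", "de", "es"]
THRESHOLDS = {"fr": 88.03, "de": 87.05, "es": 89.25}  # values from Section 5.2

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)

def rtt_score_matrix(cluster: list[str]) -> list[list[float]]:
    """One row per English document, one column per candidate language."""
    matrix = []
    for doc in cluster:
        row = []
        for lang in LANGS:
            forward = translate(doc, "en", lang)       # forward translation
            back = translate(forward, lang, "en")      # back translation
            r1 = scorer.score(doc, back)["rouge1"].fmeasure * 100
            row.append(r1 if r1 >= THRESHOLDS[lang] else 0.0)  # zero below threshold
        matrix.append(row)
    return matrix
```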
{
"type": "text",
"text": "3.2 Greedy Selection Strategy",
"text_level": 1,
"bbox": [507, 537, 759, 552],
"page_idx": 2
},
{
"type": "text",
"text": "As shown in Figure 2, after obtaining the ROUGE-1 score matrix, a greedy selection strategy is used to assign a language to each news document in the document cluster from the Multi-News dataset. Specifically, this involves first identifying the row and column of the maximum value in the matrix, and assigning the language indicated by the column to the document indicated by the row. The corresponding row and column are then removed to form a new submatrix. This process is repeated until all news documents have been assigned a language. If at any step all values in the matrix are found to be zero, the language of the remaining news documents in the submatrix is assigned as English. After completing the language assignment, each news document is transformed into the assigned language using the forward translation of the round-trip translation, replacing the content of the source document. If the assigned language is English, the document remains in its original English form. This results in a mixed-language document cluster.",
"bbox": [507, 558, 884, 896],
"page_idx": 2
},
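The greedy selection over the score matrix can be sketched as follows; `numpy` is assumed, and the row/column bookkeeping mirrors the description above (zero-only submatrices fall back to English).

```python
# Sketch of the greedy language-assignment strategy of Section 3.2, operating on
# the RTT ROUGE-1 matrix (rows = documents, columns = candidate languages).
import numpy as np

def greedy_assign(matrix: np.ndarray, langs: list[str]) -> dict[int, str]:
    m = matrix.astype(float)
    assignment = {}
    active_rows = list(range(m.shape[0]))
    active_cols = list(range(m.shape[1]))
    while active_rows and active_cols:
        sub = m[np.ix_(active_rows, active_cols)]
        if sub.max() == 0.0:            # nothing translatable well enough
            break
        i, j = np.unravel_index(sub.argmax(), sub.shape)
        row, col = active_rows[i], active_cols[j]
        assignment[row] = langs[col]    # assign the best (document, language) pair
        active_rows.remove(row)         # drop the row and column, then repeat
        active_cols.remove(col)
    for row in range(m.shape[0]):       # leftover documents stay English
        assignment.setdefault(row, "en")
    return assignment
```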
{
"type": "page_footnote",
"text": "$^{2}$ https://cloud.google.com/translate",
"bbox": [529, 906, 798, 920],
"page_idx": 2
},
{
"type": "page_number",
"text": "9257",
"bbox": [480, 927, 519, 940],
"page_idx": 2
},
{
"type": "table",
"img_path": "images/9d082352ef30e95c18cdb9a6eb947f2518239def650b5aa0d6e9fe69e26de7c4.jpg",
"table_caption": [],
"table_footnote": [
"Table 1: Statistics of the MLMD-news dataset. \"#\" represents the number of source document cluster and target summary pairs. \"Avg.Doc\", \"Avg.ClusterWords\" and \"Avg.ClusterSents\" indicate the average number of documents, average number of tokens, and average number of sentences per source document cluster, respectively. \"Avg.SumWords\" and \"Avg.SumSents\" denote the average number of tokens and average number of sentences in the target summary. \"Count\", \"Avg.DocWords\" and \"Avg.DocSents\" represent the total number of documents, average number of tokens per document, and average number of sentences per document, respectively."
],
"table_body": "<table><tr><td></td><td></td><td>Train</td><td>Valid</td><td>Test</td></tr><tr><td rowspan=\"6\">Total</td><td>#</td><td>8444</td><td>1277</td><td>1271</td></tr><tr><td>Avg.Doc</td><td>2.79</td><td>2.75</td><td>2.71</td></tr><tr><td>Avg.ClusterWords</td><td>2442.97</td><td>2457.48</td><td>2255.81</td></tr><tr><td>Avg.ClusterSents</td><td>84.14</td><td>85.49</td><td>77.85</td></tr><tr><td>Avg.SumWords</td><td>269.17</td><td>268.32</td><td>265.70</td></tr><tr><td>Avg.SumSents</td><td>9.70</td><td>9.60</td><td>9.56</td></tr><tr><td rowspan=\"3\">En</td><td>Count</td><td>7088</td><td>1027</td><td>1009</td></tr><tr><td>Avg.DocWords</td><td>653.19</td><td>732.79</td><td>706.98</td></tr><tr><td>Avg.DocSents</td><td>24.52</td><td>27.18</td><td>26.00</td></tr><tr><td rowspan=\"3\">Fr</td><td>Count</td><td>5307</td><td>779</td><td>779</td></tr><tr><td>Avg.DocWords</td><td>1020.20</td><td>969.46</td><td>951.30</td></tr><tr><td>Avg.DocSents</td><td>32.99</td><td>31.93</td><td>31.53</td></tr><tr><td rowspan=\"3\">De</td><td>Count</td><td>4431</td><td>693</td><td>646</td></tr><tr><td>Avg.DocWords</td><td>981.05</td><td>1036.24</td><td>825.07</td></tr><tr><td>Avg.DocSents</td><td>32.96</td><td>35.34</td><td>26.93</td></tr><tr><td rowspan=\"3\">Es</td><td>Count</td><td>6769</td><td>1015</td><td>1009</td></tr><tr><td>Avg.DocWords</td><td>1047.78</td><td>1048.92</td><td>1015.58</td></tr><tr><td>Avg.DocSents</td><td>36.20</td><td>36.64</td><td>35.44</td></tr></table>",
"bbox": [115, 80, 494, 373],
"page_idx": 3
},
{
"type": "text",
"text": "Finally, this mixed-language document cluster is combined with the original target summary to form an MLMD summary pair.",
"bbox": [112, 594, 487, 644],
"page_idx": 3
},
{
"type": "text",
"text": "3.3 Statistics and Analysis",
"text_level": 1,
"bbox": [112, 657, 337, 671],
"page_idx": 3
},
{
"type": "text",
"text": "Through the aforementioned process, we constructed the MLMD-news dataset, which contains 10,992 pairs of source document clusters and target summaries. The source document clusters include four languages: English, French, German, and Spanish, while the target summaries are all in English. The dataset was divided into training, validation, and test sets. Table 1 presents the statistical information of the MLMD-news dataset. Figure 3 shows the number of news documents in different languages across the training, validation, and test sets. Due to the quality control implemented through the round-trip translation strategy during processing, there are differences in the proportions of news documents in different languages.",
"bbox": [112, 680, 489, 921],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/faa39801130f9db0b3a7f93347b4256ee6571943af9b2cb2e3509b3b041ec09e.jpg",
"image_caption": [
"Figure 3: The number of news documents in different languages across the training, validation, and test sets."
],
"image_footnote": [],
"bbox": [512, 82, 882, 181],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/b75a603208199ece6d3d37b31946376929500fef47a177ea9f1b4ac48f264980.jpg",
"image_caption": [
"Figure 4: The proportion of the number of languages involved in the source document clusters across the training, validation, and test sets."
],
"image_footnote": [],
"bbox": [510, 243, 878, 361],
"page_idx": 3
},
{
"type": "text",
"text": "The number of English and Spanish documents is roughly equal, French ranks third, while German has the fewest articles. Figure 4 illustrates the proportion of the number of languages involved in the source document clusters across the training, validation, and test sets. The proportions are relatively consistent, indicating a stable language distribution. Source document clusters containing 2 languages are the most common, while those containing 4 languages are the least common.",
"bbox": [507, 445, 882, 607],
"page_idx": 3
},
{
"type": "text",
"text": "4 Graph-based Extract-Generate Model",
"text_level": 1,
"bbox": [507, 630, 873, 646],
"page_idx": 3
},
{
"type": "text",
"text": "To the best of our knowledge, there is currently no method specifically designed for the MLMD task. The input for MLMD news summarization consists of multiple mixed-language documents, which presents two main challenges: the excessive length of the input and the complex relationships between multiple documents and languages. Therefore, we propose a graph-based extract-generate model (as shown in Figure 5) as a baseline for this task. The extract-then-generate approach addresses the issue of long input, while the graph is used to model the complex relationships between multiple documents and languages. The model consists of three main modules: Graph Construction, Extractor, and Generator. This section will provide a detailed explanation of these three modules.",
"bbox": [505, 664, 884, 921],
"page_idx": 3
},
{
"type": "page_number",
"text": "9258",
"bbox": [480, 927, 519, 940],
"page_idx": 3
},
{
"type": "image",
"img_path": "images/e76bc31392fb32cb8b12387ffdb57a8471a4a4aa17f530390b8da84e3b938206.jpg",
"image_caption": [
"Figure 5: The framework of the extract-generate model involves three main components. In the Graph Construction, mixed-language source document clusters are constructed into both homogeneous and heterogeneous graphs. The Extractor extracts key sentences from the source document cluster, while the Generator generates a summary based on the sentences extracted by the Extractor."
],
"image_footnote": [],
"bbox": [159, 80, 843, 303],
"page_idx": 4
},
{
"type": "text",
"text": "4.1 Graph Construction",
"text_level": 1,
"bbox": [112, 395, 319, 411],
"page_idx": 4
},
{
"type": "text",
"text": "In order to model the complex relationships between multiple documents and languages, we construct a homogeneous graph between sentences, as well as a heterogeneous graph between sentences and words, for each input mixed-language document cluster.",
"bbox": [112, 418, 489, 513],
"page_idx": 4
},
{
"type": "text",
"text": "4.1.1 Homogeneous Graph Construction",
"text_level": 1,
"bbox": [112, 526, 448, 542],
"page_idx": 4
},
{
"type": "text",
"text": "Let $G_{1} = \\{V_{1}, E_{1}\\}$ denote a homogeneous graph, where the node set $V_{1} = \\{s_{1}, s_{2}, \\ldots, s_{n}\\}$ corresponds to the sentences within the document cluster, and the edge set $E_{1} = \\{e_{1,1}, e_{1,2}, \\dots, e_{n,n}\\}$ denotes the connections between sentences that share common words. Moreover, we refer to BERTSUM (Liu and Lapata, 2019) to obtain the initial representation of the nodes, $H_{s}^{\\prime(0)}$.",
"bbox": [112, 546, 489, 677],
"page_idx": 4
},
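As a rough illustration of this construction, connecting sentences that share common words can be done with a simple set intersection; the whitespace tokenizer below is an assumption for brevity.

```python
# Sketch of the homogeneous graph G1: sentences are nodes, and two sentences are
# connected when they share at least one word (whitespace tokens for illustration).
from itertools import combinations

def build_homogeneous_edges(sentences: list[str]) -> list[tuple[int, int]]:
    vocab_per_sent = [set(s.lower().split()) for s in sentences]
    edges = []
    for i, j in combinations(range(len(sentences)), 2):
        if vocab_per_sent[i] & vocab_per_sent[j]:  # shared word => edge e_{i,j}
            edges.append((i, j))
    return edges
```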
{
"type": "text",
"text": "4.1.2 Heterogeneous Graph Construction",
"text_level": 1,
"bbox": [112, 690, 455, 706],
"page_idx": 4
},
{
"type": "text",
"text": "Let $G_{2} = \\{V_{2}, E_{2}\\}$ denote a heterogeneous graph, where $V_{2}$ is the set of nodes and $E_{2}$ is the set of edges. In this graph, the nodes can be represented as $V_{2} = V_{1} \\cup V_{w}$, where $V_{w} = \\{w_{1}, w_{2}, \\ldots, w_{k}\\}$ is the set of words. The edges, denoted as $E_{2} = \\{e_{1,1}, \\ldots, e_{1,k}, \\ldots, e_{n,1}, \\ldots, e_{n,k}\\}$, represent the connections between the $i^{th}$ sentence and the $j^{th}$ word, with edge weights determined by TF-IDF (Aizawa, 2003). We use mBERT to initialize the static embeddings of the word nodes as $H_{w}^{(0)}$. To obtain the representations of the sentence nodes, we first concatenate the tokens of the words in each sentence and then input them into mBERT to create",
"bbox": [112, 709, 489, 920],
"page_idx": 4
},
{
"type": "text",
"text": "the initial representation of the sentence node as $H_{s}$. Next, we employ a convolutional neural network (CNN) to capture local information within the sentence. To extract sentence-level features, we apply a bidirectional long short-term memory (BiLSTM) network to capture contextual dependencies. Finally, by concatenating the outputs of the CNN and BiLSTM, we generate the representation of the sentence node that encompasses both intra-sentence and inter-sentence information, denoted as $H_{s}^{(0)}$.",
"bbox": [507, 395, 884, 557],
"page_idx": 4
},
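A hedged PyTorch sketch of this sentence-node initialization (CNN for intra-sentence features, BiLSTM for context, concatenation into $H_{s}^{(0)}$); all dimensions and pooling choices below are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of the sentence-node initialization: CNN for local features, BiLSTM for
# contextual features, concatenated into H_s^(0). Dimensions are assumptions.
import torch
import torch.nn as nn

class SentenceNodeInit(nn.Module):
    def __init__(self, emb_dim: int = 768, hidden: int = 256, kernel: int = 3):
        super().__init__()
        self.cnn = nn.Conv1d(emb_dim, hidden, kernel, padding=kernel // 2)
        self.bilstm = nn.LSTM(emb_dim, hidden // 2, bidirectional=True,
                              batch_first=True)

    def forward(self, token_embs: torch.Tensor) -> torch.Tensor:
        # token_embs: (batch, seq_len, emb_dim), e.g., mBERT token outputs
        local = self.cnn(token_embs.transpose(1, 2)).max(dim=2).values  # (batch, hidden)
        ctx, _ = self.bilstm(token_embs)
        ctx = ctx.mean(dim=1)                                           # (batch, hidden)
        return torch.cat([local, ctx], dim=-1)                          # H_s^(0)
```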
{
"type": "text",
"text": "4.2 Extractor",
"text_level": 1,
"bbox": [507, 569, 633, 583],
"page_idx": 4
},
{
"type": "text",
"text": "In the extractor, we first perform sentence representation learning and then extract key sentences.",
"bbox": [507, 590, 882, 621],
"page_idx": 4
},
{
"type": "text",
"text": "4.2.1 Sentence Representation Learning",
"text_level": 1,
"bbox": [507, 630, 840, 646],
"page_idx": 4
},
{
"type": "text",
"text": "Before extracting key sentences, we first use GAT (Graph Attention Network) (Veličković et al., 2017) and heterogeneous GAT (Wang et al., 2020) to learn sentence representations on the homogeneous and heterogeneous graphs, respectively.",
"bbox": [507, 649, 882, 730],
"page_idx": 4
},
{
"type": "text",
"text": "In the homogeneous graph, we calculate the sentence representation $H_{s}^{\\prime(1)}$ using the following formula:",
"bbox": [507, 730, 882, 778],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nU_{s \\rightarrow s}^{\\prime(1)} = \\operatorname{GAT}_{s2s}\\left(H_{s}^{\\prime(0)}, H_{s}^{\\prime(0)}, H_{s}^{\\prime(0)}\\right)\n$$\n",
"text_format": "latex",
"bbox": [554, 802, 882, 829],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nH_{s}^{\\prime(1)} = \\mathrm{FNN}\\left(H_{s}^{\\prime(0)} + U_{s \\rightarrow s}^{\\prime(1)}\\right) \\tag{1}\n$$\n",
"text_format": "latex",
"bbox": [557, 825, 781, 845],
"page_idx": 4
},
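Assuming a standard GAT layer (for example, torch_geometric's GATConv), the homogeneous update above can be sketched as a single attention pass plus a feed-forward layer with a residual connection; the layer choice and dimensions are assumptions.

```python
# Sketch of the homogeneous update of Eq. (1): one GAT pass over the sentence
# graph, then FNN(H + U) as a residual feed-forward step.
import torch.nn as nn
from torch_geometric.nn import GATConv

class HomoSentenceGAT(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gat = GATConv(dim, dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h_s, edge_index):
        u = self.gat(h_s, edge_index)   # U'_{s->s}
        return self.ffn(h_s + u)        # H'_s(1) = FNN(H'_s(0) + U'_{s->s})
```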
{
"type": "text",
"text": "In the heterogeneous graph, we learn the sentence representation $H_{s}^{(n)}$ through $n$ iterations, where the iteration process at step $t + 1$ is as follows:",
"bbox": [507, 854, 884, 917],
"page_idx": 4
},
{
"type": "page_number",
"text": "9259",
"bbox": [480, 927, 519, 940],
"page_idx": 4
},
{
"type": "equation",
"text": "\n$$\nU_{s \\rightarrow w}^{(t+1)} = \\operatorname{GAT}_{s2w}\\left(H_{w}^{(t)}, H_{s}^{(t)}, H_{s}^{(t)}\\right)\n$$\n",
"text_format": "latex",
"bbox": [134, 103, 408, 123],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nH_{w}^{(t+1)} = \\mathrm{FNN}\\left(H_{w}^{(t)} + U_{s \\rightarrow w}^{(t+1)}\\right)\n$$\n",
"text_format": "latex",
"bbox": [134, 124, 485, 153],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nU_{w \\rightarrow s}^{(t+1)} = \\operatorname{GAT}_{w2s}\\left(H_{s}^{(t)}, H_{w}^{(t+1)}, H_{w}^{(t+1)}\\right)\n$$\n",
"text_format": "latex",
"bbox": [137, 142, 487, 167],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nH_{s}^{(t+1)} = \\mathrm{FNN}\\left(H_{s}^{(t)} + U_{w \\rightarrow s}^{(t+1)}\\right) \\tag{2}\n$$\n",
"text_format": "latex",
"bbox": [137, 168, 359, 189],
"page_idx": 5
},
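The four updates of Eq. (2) can be read as one message-passing round repeated $n$ times; below is a sketch with the attention and feed-forward layers passed in as callables (their internals are as in the homogeneous case, and the argument order follows GAT(query, key, value) from the equations).

```python
# Sketch of the iterative sentence-word updates of Eq. (2): word nodes attend to
# sentence nodes, then sentence nodes attend to the refreshed word nodes.
def heterogeneous_iterations(h_s, h_w, gat_s2w, gat_w2s, fnn_w, fnn_s, n_iters: int):
    for _ in range(n_iters):
        u_sw = gat_s2w(h_w, h_s, h_s)   # sentences -> words
        h_w = fnn_w(h_w + u_sw)
        u_ws = gat_w2s(h_s, h_w, h_w)   # words -> sentences
        h_s = fnn_s(h_s + u_ws)
    return h_s                           # H_s^(n)
```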
{
"type": "text",
"text": "4.2.2 Extracting Key Sentences",
"text_level": 1,
"bbox": [112, 195, 374, 211],
"page_idx": 5
},
{
"type": "text",
"text": "To extract key sentences, we first concatenate the sentence representations obtained from the homogeneous graph sentence representation learning, which capture inter-sentence relationships, with the sentence representations obtained from the heterogeneous graph sentence representation learning, which capture intra-sentence relationships. This results in the final sentence representation that incorporates both inter-sentence and intra-sentence relationships. Then, we use top-$K$ selection as defined in Section 4.4 to extract the indices and scores of the top-$K$ key sentences. The above process can be represented as follows:",
"bbox": [112, 215, 489, 423],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nH_{s}^{*} = H_{s}^{\\prime(1)} \\oplus H_{s}^{(n)} \\tag{3}\n$$\n",
"text_format": "latex",
"bbox": [186, 429, 487, 456],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\n\\mathrm{indices}, \\mathrm{score} = \\operatorname{top}\\text{-}K\\left(H_{s}^{*}\\right)\n$$\n",
"text_format": "latex",
"bbox": [188, 451, 411, 469],
"page_idx": 5
},
{
"type": "text",
"text": "Finally, we use these indices to locate key sentences, and combine them to form the key snippet $X_{key}$.",
"bbox": [112, 475, 489, 525],
"page_idx": 5
},
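A sketch of this selection step follows, assuming a learned linear scorer over $H_{s}^{*}$ (the exact scoring head is not spelled out above, so the linear layer is an assumption).

```python
# Sketch of top-K key-sentence extraction: a linear scorer over H_s^* yields
# per-sentence scores; torch.topk returns the indices and scores used for X_key.
import torch

def extract_key_sentences(h_star: torch.Tensor, scorer: torch.nn.Linear,
                          sentences: list[str], k: int = 10):
    scores = scorer(h_star).squeeze(-1)                 # (num_sentences,)
    score, indices = torch.topk(scores, k=min(k, len(sentences)))
    x_key = [sentences[i] for i in indices.tolist()]    # key snippet X_key
    return x_key, indices, score
```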
{
"type": "text",
"text": "4.3 Generator",
"text_level": 1,
"bbox": [112, 533, 243, 546],
"page_idx": 5
},
{
"type": "text",
"text": "In the generator, we first input $x \\in X_{key}$ into mBART and obtain $h_{x}^{t}$, which is the model's output before the final language model head. Then, $h_{x}^{t}$ is fed into the final language model head and a multilayer perceptron (MLP) to obtain a generation probability $P_{\\theta}(y_t|x,y_{< t})$ and a dynamic weight $P_{\\theta}(x|X_{key},y_{< t})$, respectively. Here, the dynamic weight represents the probability of selecting $x$ from $X_{key}$ for summary generation. Therefore, the generation probability of the final summary $y$ is calculated by marginalizing (MARG) as follows:",
"bbox": [112, 554, 489, 731],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\nP_{\\theta}(y \\mid x, X_{key}) = \\prod_{t=1}^{T} \\sum_{x \\in X_{key}} P_{\\theta}\\left(y_{t} \\mid x, y_{<t}\\right) P_{\\theta}\\left(x \\mid X_{key}, y_{<t}\\right) \\tag{4}\n$$\n",
"text_format": "latex",
"bbox": [126, 737, 485, 818],
"page_idx": 5
},
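Numerically, the marginalization in Eq. (4) is a log-sum-exp over snippets at each decoding step; a sketch with illustrative tensor shapes (all shapes here are assumptions for exposition):

```python
# Sketch of Eq. (4): mix per-snippet token log-probs with the dynamic weights
# P(x | X_key, y_<t) at every step. token_logprobs: (num_snippets, T, vocab);
# weight_logprobs: (num_snippets, T); target_ids: (T,).
import torch

def marginal_log_prob(token_logprobs: torch.Tensor,
                      weight_logprobs: torch.Tensor,
                      target_ids: torch.Tensor) -> torch.Tensor:
    # gather log P(y_t | x, y_<t) for the gold tokens: (num_snippets, T)
    gold = token_logprobs.gather(
        2, target_ids.view(1, -1, 1).expand(token_logprobs.size(0), -1, 1)
    ).squeeze(-1)
    # log sum_x exp(log P(y_t | x) + log P(x)) per step, then sum over t
    step_logprob = torch.logsumexp(gold + weight_logprobs, dim=0)  # (T,)
    return step_logprob.sum()  # log P(y | X_key); the NLL loss is its negative
```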
{
"type": "text",
"text": "4.4 Loss",
"text_level": 1,
"bbox": [112, 828, 196, 841],
"page_idx": 5
},
{
"type": "text",
"text": "Extractor Loss: For the extractor, we use cross-entropy to measure the loss of key sentence extraction:",
"bbox": [112, 848, 489, 895],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L}_{ext} = -\\left(z \\log(\\hat{z}) + (1 - z) \\log(1 - \\hat{z})\\right) \\tag{5}\n$$\n",
"text_format": "latex",
"bbox": [134, 904, 487, 921],
"page_idx": 5
},
{
"type": "text",
"text": "where $\\hat{z}$ is the predicted result, which can be computed using the indices from Eq.(3), and $z$ is the true label. The calculation process is as follows: First, we use mBERT to represent all the sentences in the source document clusters and the target summary. Then, we calculate the cosine similarity between these sentences and the target summary, and label the top-$K$ sentences with the highest similarity as key sentences.",
"bbox": [507, 84, 884, 228],
"page_idx": 5
},
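A sketch of this labeling procedure; `sentence_embs` and `summary_emb` are assumed to be mBERT-derived vectors (for example, mean-pooled hidden states).

```python
# Sketch of the oracle label construction for z: cosine similarity between
# sentence embeddings and the summary embedding, with a top-K cut-off.
import torch
import torch.nn.functional as F

def oracle_labels(sentence_embs: torch.Tensor, summary_emb: torch.Tensor,
                  k: int = 10) -> torch.Tensor:
    sims = F.cosine_similarity(sentence_embs, summary_emb.unsqueeze(0), dim=-1)
    labels = torch.zeros(sentence_embs.size(0))
    labels[sims.topk(min(k, sims.numel())).indices] = 1.0  # key sentences get z = 1
    return labels
```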
{
"type": "text",
"text": "Generator Loss: For the generator, we use the negative log-likelihood loss (NLL) to measure the loss:",
"bbox": [507, 229, 882, 274],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L}_{gen} = -\\log P_{\\theta}(y \\mid x, X_{key}) \\tag{6}\n$$\n",
"text_format": "latex",
"bbox": [589, 277, 882, 294],
"page_idx": 5
},
{
"type": "text",
"text": "Consistency Loss: The dynamic weight $P_{\\theta}(x|X_{key},y_{< t})$ of the generator represents the probability of selecting $x$ from $X_{key}$ at the $t$-th time step, essentially serving the same function as the extractor. Therefore, we adopt a KL divergence-based Consistency Loss proposed by Mao et al. (2021) to quantify the difference between the average dynamic weight and the extractor's predicted scores:",
"bbox": [507, 300, 884, 443],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L}_{con} = \\mathrm{KL}\\left(\\frac{1}{T} \\sum_{t=1}^{T} P_{\\theta}\\left(x \\mid X_{key}, y_{<t}\\right), \\operatorname{Softmax}(score)\\right) \\tag{7}\n$$\n",
"text_format": "latex",
"bbox": [552, 451, 882, 514],
"page_idx": 5
},
{
"type": "text",
"text": "Total Loss: The overall model loss can be defined as follows:",
"bbox": [507, 525, 882, 555],
"page_idx": 5
},
{
"type": "equation",
"text": "\n$$\n\\mathcal{L}_{total} = \\lambda_{ext} \\mathcal{L}_{ext} + \\lambda_{gen} \\mathcal{L}_{gen} + \\lambda_{con} \\mathcal{L}_{con} \\tag{8}\n$$\n",
"text_format": "latex",
"bbox": [524, 568, 882, 586],
"page_idx": 5
},
{
"type": "text",
"text": "where $\\lambda_{ext}$, $\\lambda_{gen}$, and $\\lambda_{con}$ are hyperparameters.",
"bbox": [507, 596, 868, 613],
"page_idx": 5
},
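For completeness, the weighted combination of Eq. (8) with the hyperparameter values reported in Section 5.2:

```python
# Sketch of the training objective of Eq. (8); the default weights are the
# values reported in Section 5.2 (lambda_ext = 1, lambda_gen = 0.1, lambda_con = 1e-4).
def total_loss(l_ext, l_gen, l_con,
               lam_ext: float = 1.0, lam_gen: float = 0.1, lam_con: float = 1e-4):
    return lam_ext * l_ext + lam_gen * l_gen + lam_con * l_con
```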
{
"type": "text",
"text": "5 Experiments",
"text_level": 1,
"bbox": [507, 623, 655, 640],
"page_idx": 5
},
{
"type": "text",
"text": "In this section, we will introduce the baselines we used and present the implementation details.",
"bbox": [507, 649, 880, 680],
"page_idx": 5
},
{
"type": "text",
"text": "5.1 Baselines",
"text_level": 1,
"bbox": [507, 690, 628, 705],
"page_idx": 5
},
{
"type": "text",
"text": "To benchmark the MLMD-news dataset, in addition to our proposed graph-based extract-generate method, we also used the following baselines, which can be categorized into Extract-then-translate, Translate-then-extract, Translate-then-MDS, Abstractive models, LLM, and Extract-then-abstract.",
"bbox": [507, 711, 884, 807],
"page_idx": 5
},
{
"type": "text",
"text": "Extract-then-translate: First, summaries are extracted from the source document cluster using classic extractive models such as Centroid (Radev et al., 2004), LexRank (Erkan and Radev, 2004), MMR (Carbonell and Goldstein, 1998), and TextRank (Mihalcea and Tarau, 2004), and then translated into the target language.",
"bbox": [507, 809, 884, 921],
"page_idx": 5
},
{
"type": "page_number",
"text": "9260",
"bbox": [480, 927, 521, 940],
"page_idx": 5
},
{
"type": "text",
"text": "Translate-then-extract: First, the documents in the source document cluster are translated into the target language, and then summaries are extracted using classic extractive models such as Centroid, LexRank, MMR, and TextRank.",
"bbox": [112, 84, 487, 162],
"page_idx": 6
},
{
"type": "text",
"text": "Translate-then-MDS: First, the documents in the source document cluster are translated into the target language, and then summaries are generated using SLMD models such as PRIMERA (Xiao et al., 2022) and PEGASUS (Zhang et al., 2019).",
"bbox": [112, 165, 487, 244],
"page_idx": 6
},
{
"type": "text",
"text": "Abstractive models: Use mT5 (Xue, 2020) and mBART (Tang et al., 2020), which have multi-language understanding and generation capabilities, to directly generate summaries from the source document cluster. If the input exceeds the model's capacity, the excess parts will be truncated.",
"bbox": [112, 246, 487, 341],
"page_idx": 6
},
{
"type": "text",
"text": "LLM: Use models such as GPT-3.5-turbo-16k³, GPT-4.0-turbo-32k³, Llama-3.1-8B-Instruct (Dubey et al., 2024), Gemini-1.5-pro⁴, and Claude-2.1⁵, which have multi-language and long input capabilities, to directly generate summaries from the source document cluster.",
"bbox": [112, 342, 487, 437],
"page_idx": 6
},
{
"type": "text",
"text": "Extract-then-abstract: First, use classic extractive models such as Centroid and TextRank to extract summaries from the source document cluster, and then generate the target summary using generative models like mT5 and mBART.",
"bbox": [112, 439, 487, 518],
"page_idx": 6
},
{
"type": "text",
"text": "5.2 Implementation Details",
"text_level": 1,
"bbox": [112, 532, 344, 546],
"page_idx": 6
},
{
"type": "text",
"text": "In constructing the MLMD-news dataset, the ROUGE-1 thresholds for French, German, and Spanish were set to 88.03, 87.05, and 89.25, respectively, based on the average ROUGE-1 scores for each language's news. For the Graph-based Extract-Generate model, we set $\\lambda_{\\mathrm{ext}} = 1$, $\\lambda_{\\mathrm{gen}} = 0.1$, and $\\lambda_{\\mathrm{con}} = 0.0001$. The extractor's learning rate was set to $5 \\times 10^{-6}$, while the generator's learning rate was $5 \\times 10^{-5}$. The batch size was 8, and top-$K$ was set to 10. ROUGE scores are calculated with pyrouge<sup>6</sup>. All experiments were conducted on NVIDIA L20 GPUs. In addition, the total number of parameters in the Graph-based Extract-Generate model is about 800M.",
"bbox": [112, 552, 487, 776],
"page_idx": 6
},
{
"type": "text",
"text": "5.3 Benchmark Experiments",
"text_level": 1,
"bbox": [112, 789, 356, 804],
"page_idx": 6
},
{
"type": "text",
"text": "In Table 2, we present the ROUGE scores for different methods on the MLMD-news dataset, and the following observations can be made:",
"bbox": [112, 810, 487, 858],
"page_idx": 6
},
{
"type": "table",
"img_path": "images/55626c69d61e1510fc87bcc7a653de5f04a372c3ec4cae573bf5adcbba5dad0b.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td></td><td>R-1</td><td>R-2</td><td>R-L</td></tr><tr><td colspan=\"4\">Extract-then-translate</td></tr><tr><td>Centroid</td><td>27.90</td><td>6.92</td><td>23.35</td></tr><tr><td>LexRank</td><td>28.61</td><td>7.30</td><td>24.27</td></tr><tr><td>MMR</td><td>24.07</td><td>5.61</td><td>20.23</td></tr><tr><td>TextRank</td><td>28.66</td><td>7.28</td><td>24.13</td></tr><tr><td colspan=\"4\">Translate-then-extract</td></tr><tr><td>Centroid</td><td>29.16</td><td>7.64</td><td>23.60</td></tr><tr><td>LexRank</td><td>31.12</td><td>8.53</td><td>25.70</td></tr><tr><td>MMR</td><td>25.58</td><td>6.11</td><td>20.93</td></tr><tr><td>TextRank</td><td>30.18</td><td>8.04</td><td>24.55</td></tr><tr><td colspan=\"4\">Translate-then-MDS</td></tr><tr><td>PEGASUS</td><td>35.54</td><td>9.44</td><td>29.95</td></tr><tr><td>PRIMERA</td><td>36.83</td><td>10.79</td><td>32.12</td></tr><tr><td colspan=\"4\">Abstractive models</td></tr><tr><td>mBART(1024)</td><td>36.84</td><td>8.13</td><td>32.22</td></tr><tr><td>mT5(1024)</td><td>33.21</td><td>6.26</td><td>27.43</td></tr><tr><td colspan=\"4\">LLM</td></tr><tr><td>GPT-3.5-turbo-16k</td><td>34.36</td><td>8.88</td><td>30.74</td></tr><tr><td>GPT-4.0-turbo-32k</td><td>39.02</td><td>10.45</td><td>34.68</td></tr><tr><td>Llama-3.1-8B-Instruct</td><td>36.61</td><td>10.44</td><td>33.31</td></tr><tr><td>Gemini-1.5-pro</td><td>40.79</td><td>12.05</td><td>36.59</td></tr><tr><td>Claude-2.1</td><td>40.51</td><td>11.67</td><td>36.16</td></tr><tr><td colspan=\"4\">Extract-then-abstract</td></tr><tr><td>TextRank-then-mBART</td><td>32.00</td><td>5.84</td><td>28.00</td></tr><tr><td>Centroid-then-mBART</td><td>32.76</td><td>5.70</td><td>28.71</td></tr><tr><td>TextRank-then-mT5</td><td>31.63</td><td>5.22</td><td>26.46</td></tr><tr><td>Centroid-then-mT5</td><td>31.39</td><td>5.21</td><td>26.25</td></tr><tr><td>Our</td><td>39.16</td><td>9.64</td><td>34.02</td></tr></table>",
"bbox": [517, 80, 875, 500],
"page_idx": 6
},
{
"type": "text",
"text": "Table 2: The benchmark experimental results on the MLMD-news dataset.",
"bbox": [507, 508, 880, 537],
"page_idx": 6
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- The ROUGE scores for the Extract-then-translate methods are quite low, which can be attributed to the limited support of classic extractive methods for mixed languages and the translation of extracted sentences.",
"- The ROUGE scores for the Translate-then-extract methods are higher than those for the Extract-then-translate methods, possibly because these classic extractive methods perform better with single-language input.",
"- The ROUGE score of the Translate-then-MDS method is significantly higher than that of the Translate-then-extract methods. This result is expected because the MDS method has been improved for multi-document processing, while traditional extraction methods have not.",
"- Abstractive models show a significant advantage in ROUGE scores compared to the Extract-then-translate and Translate-then"
],
"bbox": [531, 564, 880, 917],
"page_idx": 6
},
{
"type": "page_footnote",
"text": "<sup>3</sup>https://openai.com/",
"bbox": [136, 866, 292, 879],
"page_idx": 6
},
{
"type": "page_footnote",
"text": "<sup>4</sup>https://gemini.google.com/",
"bbox": [136, 881, 341, 894],
"page_idx": 6
},
{
"type": "page_footnote",
"text": "<sup>5</sup>https://claude.ai/",
"bbox": [136, 894, 282, 906],
"page_idx": 6
},
{
"type": "page_footnote",
"text": "<sup>6</sup>https://github.com/andersjo/pyrouge",
"bbox": [136, 906, 410, 920],
"page_idx": 6
},
{
"type": "page_number",
"text": "9261",
"bbox": [480, 928, 517, 940],
"page_idx": 6
},
{
"type": "image",
"img_path": "images/0e4ad7f7e386e0c078d550d3ff5bd57c879322792a49f60b5e01726560b01f35.jpg",
"image_caption": [
"Figure 6: Parameter Sensitivity of top-$K$ on the ROUGE score."
],
"image_footnote": [],
"bbox": [196, 82, 803, 186],
"page_idx": 7
},
{
"type": "text",
"text": "extract methods, possibly because they possess strong multi-language understanding capabilities. Additionally, the performance of Abstractive models is similar to Translate-then-MDS, indicating that having multi-document processing capability and multi-language understanding are equally important in MLMD.",
"bbox": [149, 235, 489, 362],
"page_idx": 7
},
{
"type": "list",
"sub_type": "text",
"list_items": [
"- The best and second-best results are achieved by LLMs, mainly due to their strong multi-language understanding and generation capabilities, as well as their ability to accept very long input documents.",
"- Aside from our proposed method, other Extract-then-abstract methods have lower ROUGE scores compared to Abstractive models. This suggests that inappropriate extraction may not only fail to enhance summarization performance but could also lead to poorer final results due to the loss of important information.",
"- The results indicate that our method addresses the above issues of other Extract-then-abstract methods and achieves performance close to that of LLMs, demonstrating the effectiveness of our method."
],
"bbox": [136, 374, 489, 681],
"page_idx": 7
},
{
"type": "text",
"text": "5.4 Ablation Study",
"text_level": 1,
"bbox": [112, 690, 280, 706],
"page_idx": 7
},
{
"type": "text",
"text": "In Table 3, we present the impact of different modules of our model on the MLMD-news dataset, including the extractor module, generator module, and consistency loss. When the extractor module is removed (i.e., w/o extractor), our method degenerates to mBART, resulting in decreases of 2.32 points, 1.51 points, and 1.80 points in ROUGE-1, ROUGE-2, and ROUGE-L, respectively. This indicates that extracting key sentences significantly impacts the overall quality of the summary. When the generator module is removed (i.e., w/o generator), the extracted sentences are multi-language, and machine translation is used to convert them into",
"bbox": [112, 712, 489, 920],
"page_idx": 7
},
{
"type": "table",
"img_path": "images/20128259c5ba9b30b1487811e77872dbda5363ca15c1fda05905d7f98d8c5739.jpg",
"table_caption": [],
"table_footnote": [],
"table_body": "<table><tr><td></td><td>R-1</td><td>R-2</td><td>R-L</td></tr><tr><td>Our</td><td>39.16</td><td>9.64</td><td>34.02</td></tr><tr><td>w/o extractor</td><td>36.84</td><td>8.13</td><td>32.22</td></tr><tr><td>w/o generator</td><td>34.17</td><td>8.45</td><td>31.04</td></tr><tr><td>w/o consistency</td><td>38.81</td><td>9.41</td><td>33.85</td></tr></table>",
"bbox": [547, 231, 847, 309],
"page_idx": 7
},
{
"type": "text",
"text": "Table 3: Ablation Study.",
"bbox": [610, 317, 779, 332],
"page_idx": 7
},
{
"type": "text",
"text": "a summary, resulting in decreases of 4.99 points, 1.19 points, and 2.98 points in ROUGE-1, ROUGE-2, and ROUGE-L, respectively. This indicates that the generator plays a crucial role in the overall quality of the summary. Finally, when the consistency loss module is removed (i.e., w/o consistency), ROUGE-1, ROUGE-2, and ROUGE-L decrease by 0.35 points, 0.23 points, and 0.17 points, respectively. This suggests that the consistency loss helps optimize the extraction quality of the extractor.",
"bbox": [507, 359, 885, 521],
"page_idx": 7
},
{
"type": "text",
"text": "5.5 Parameter sensitivity",
"text_level": 1,
"bbox": [507, 533, 722, 550],
"page_idx": 7
},
{
"type": "text",
"text": "We also explored the impact of extracting different numbers of key sentences (i.e., different $K$ of top-$K$) on model performance in the MLMD-news dataset. As shown in Figure 6, the ROUGE score increases with the increase in $K$, reaching a peak at $K = 10$. However, when $K$ exceeds 10, the ROUGE score begins to decline, possibly due to the introduction of noise information from including too many sentences.",
"bbox": [507, 556, 885, 702],
"page_idx": 7
},
{
"type": "text",
"text": "6 Conclusion",
"text_level": 1,
"bbox": [507, 715, 640, 732],
"page_idx": 7
},
{
"type": "text",
"text": "In this paper, we constructed the first mixed-language multi-document news summarization dataset (MLMD-news) and proposed a graph-based extract-generate model specifically designed for the MLMD news summarization task. We conducted benchmark tests on the MLMD-news dataset, evaluating our proposed method along with advanced methods such as LLM. Additionally, we have publicly released the dataset and code, hoping to foster further development in the MLMD news summarization area.",
"bbox": [507, 744, 885, 920],
"page_idx": 7
},
{
|
| 1458 |
+
"type": "page_number",
|
| 1459 |
+
"text": "9262",
|
| 1460 |
+
"bbox": [
|
| 1461 |
+
480,
|
| 1462 |
+
928,
|
| 1463 |
+
521,
|
| 1464 |
+
940
|
| 1465 |
+
],
|
| 1466 |
+
"page_idx": 7
|
| 1467 |
+
},
|
| 1468 |
+
{
|
| 1469 |
+
"type": "text",
|
| 1470 |
+
"text": "Limitations",
|
| 1471 |
+
"text_level": 1,
|
| 1472 |
+
"bbox": [
|
| 1473 |
+
114,
|
| 1474 |
+
84,
|
| 1475 |
+
220,
|
| 1476 |
+
99
|
| 1477 |
+
],
|
| 1478 |
+
"page_idx": 8
|
| 1479 |
+
},
|
| 1480 |
+
{
|
| 1481 |
+
"type": "text",
|
| 1482 |
+
"text": "Although our method demonstrates significant performance advantages in the mixed-language multi-document summarization task, due to the limitations of GPU performance, we set the maximum number of sentence extractions (top- $K$ ) in our experiments to 13. Increasing the top- $K$ value further might improve the model's optimal performance, but this hypothesis has yet to be validated. Moreover, the mixed-language dataset we constructed currently primarily includes rich-resource languages such as German and English, with a limited number of languages involved. Future work could extend to more languages, especially low-resource ones, to further verify the method's applicability and generalization capability.",
|
| 1483 |
+
"bbox": [
|
| 1484 |
+
112,
|
| 1485 |
+
109,
|
| 1486 |
+
492,
|
| 1487 |
+
351
|
| 1488 |
+
],
|
| 1489 |
+
"page_idx": 8
|
| 1490 |
+
},
|
| 1491 |
+
{
|
| 1492 |
+
"type": "text",
|
| 1493 |
+
"text": "Ethical Considerations",
|
| 1494 |
+
"text_level": 1,
|
| 1495 |
+
"bbox": [
|
| 1496 |
+
114,
|
| 1497 |
+
363,
|
| 1498 |
+
315,
|
| 1499 |
+
378
|
| 1500 |
+
],
|
| 1501 |
+
"page_idx": 8
|
| 1502 |
+
},
|
| 1503 |
+
{
|
| 1504 |
+
"type": "text",
|
| 1505 |
+
"text": "Our MLMD-news dataset is built on the publicly available multi-document summarization dataset Multi-News, through translation and filtering processes. During the construction of the dataset, we strictly adhered to academic ethical guidelines, respected data privacy and related rights, and ensured that the use of the data complied with ethical standards. At the same time, we implemented rigorous procedures and standards to guarantee the transparency and reliability of data processing, thus supporting credible research outcomes.",
|
| 1506 |
+
"bbox": [
|
| 1507 |
+
112,
|
| 1508 |
+
388,
|
| 1509 |
+
489,
|
| 1510 |
+
565
|
| 1511 |
+
],
|
| 1512 |
+
"page_idx": 8
|
| 1513 |
+
},
|
| 1514 |
+
{
|
| 1515 |
+
"type": "text",
|
| 1516 |
+
"text": "References",
|
| 1517 |
+
"text_level": 1,
|
| 1518 |
+
"bbox": [
|
| 1519 |
+
114,
|
| 1520 |
+
592,
|
| 1521 |
+
213,
|
| 1522 |
+
607
|
| 1523 |
+
],
|
| 1524 |
+
"page_idx": 8
|
| 1525 |
+
},
|
| 1526 |
+
{
|
| 1527 |
+
"type": "list",
|
| 1528 |
+
"sub_type": "ref_text",
|
| 1529 |
+
"list_items": [
|
| 1530 |
+
"Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.",
|
| 1531 |
+
"Akiko Aizawa. 2003. An information-theoretic perspective of tfidf measures. Information Processing & Management, 39(1):45-65.",
|
| 1532 |
+
"Florian Boudin, Stéphane Huet, and Juan-Manuel Torres-Moreno. 2011. A graph-based approach to cross-language multi-document summarization. Polibits, (43):113-118.",
|
| 1533 |
+
"Yuang Cai and Yuyu Yuan. 2024. Car-transformer: Cross-attention reinforcement transformer for crosslingual summarization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17718-17726.",
|
| 1534 |
+
"Jaime Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In Proceedings"
|
| 1535 |
+
],
|
| 1536 |
+
"bbox": [
|
| 1537 |
+
115,
|
| 1538 |
+
614,
|
| 1539 |
+
489,
|
| 1540 |
+
920
|
| 1541 |
+
],
|
| 1542 |
+
"page_idx": 8
|
| 1543 |
+
},
|
| 1544 |
+
{
|
| 1545 |
+
"type": "list",
|
| 1546 |
+
"sub_type": "ref_text",
|
| 1547 |
+
"list_items": [
|
| 1548 |
+
"of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 335-336.",
|
| 1549 |
+
"Asli Celikyilmaz and Dilek Hakkani-Tur. 2010. A hybrid hierarchical model for multi-document summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 815-824.",
|
| 1550 |
+
"Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783.",
|
| 1551 |
+
"Günes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artificial intelligence research, 22:457-479.",
|
| 1552 |
+
"Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. arXiv preprint arXiv:1906.01749.",
|
| 1553 |
+
"Luciano Floridi and Massimo Chiriatti. 2020. Gpt-3: Its nature, scope, limits, and consequences. *Minds and Machines*, 30:681–694.",
|
| 1554 |
+
"Giacomo Frisoni, Paolo Italiani, Stefano Salvatori, and Gianluca Moro. 2023. Cogito ergo summ: abstractive summarization of biomedical papers via semantic parsing graphs and consistency rewards. In Proceedings of the AAAI conference on artificial intelligence, volume 37, pages 12781-12789.",
|
| 1555 |
+
"Alireza Ghadimi and Hamid Beigy. 2022. Hybrid multi-document summarization using pre-trained language models. Expert Systems with Applications, 192:116292.",
|
| 1556 |
+
"George Giannakopoulos. 2013. Multi-document multilingual summarization and evaluation tracks in acl 2013 multiling workshop. In Proceedings of the multiling 2013 workshop on multilingual multi-document summarization, pages 20-28.",
|
| 1557 |
+
"Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of human language technologies: The 2009 annual conference of the North American Chapter of the Association for Computational Linguistics, pages 362-370.",
|
| 1558 |
+
"Hanqi Jin and Xiaojun Wan. 2020. Abstractive multi-document summarization via joint learning with single-document summarization. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2545-2554.",
|
| 1559 |
+
"Akanksha Joshi, Eduardo Fidalgo, Enrique Alegre, and Laura Fernández-Robles. 2023. Deepsumm: Exploiting topic models and sequence to sequence networks for extractive text summarization. Expert Systems with Applications, 211:118442."
|
| 1560 |
+
],
|
| 1561 |
+
"bbox": [
|
| 1562 |
+
510,
|
| 1563 |
+
85,
|
| 1564 |
+
884,
|
| 1565 |
+
920
|
| 1566 |
+
],
|
| 1567 |
+
"page_idx": 8
|
| 1568 |
+
},
|
| 1569 |
+
{
|
| 1570 |
+
"type": "page_number",
|
| 1571 |
+
"text": "9263",
|
| 1572 |
+
"bbox": [
|
| 1573 |
+
480,
|
| 1574 |
+
928,
|
| 1575 |
+
519,
|
| 1576 |
+
940
|
| 1577 |
+
],
|
| 1578 |
+
"page_idx": 8
|
| 1579 |
+
},
|
| 1580 |
+
{
|
| 1581 |
+
"type": "list",
|
| 1582 |
+
"sub_type": "ref_text",
|
| 1583 |
+
"list_items": [
|
| 1584 |
+
"Thang Le. 2024. Cross-lingual summarization with pseudo-label regularization. In *Findings of the Association for Computational Linguistics: NAACL* 2024, pages 4644–4677.",
|
| 1585 |
+
"Elvys Linhares Pontes, Stéphane Huet, Juan-Manuel Torres-Moreno, and Andréa Carneiro Linhares. 2018. Cross-language text summarization using sentence and multi-sentence compression. In *Natural Language Processing and Information Systems: 23rd International Conference on Applications of Natural Language to Information Systems*, NLDB 2018, Paris, France, June 13-15, 2018, Proceedings 23, pages 467-479. Springer.",
|
| 1586 |
+
"Marina Litvak and Mark Last. 2008. Graph-based keyword extraction for single-document summarization. In *Coling* 2008: Proceedings of the workshop multisource multilingual information extraction and summarization, pages 17-24.",
|
| 1587 |
+
"Shuaiqi Liu, Jiannong Cao, Ruosong Yang, and Zhiyuan Wen. 2021. Highlight-transformer: Leveraging key phrase aware attention to improve abstractive multi-document summarization. In *Findings of the Association for Computational Linguistics: ACL-IJCNLP* 2021, pages 5021-5027.",
|
| 1588 |
+
"Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345.",
|
| 1589 |
+
"Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chenguang Zhu, Ahmed H Awadallah, and Dragomir Radev. 2021. Dyle: Dynamic latent extraction for abstractive long-input summarization. arXiv preprint arXiv:2110.08168.",
|
| 1590 |
+
"Laura Mascarell, Rabin Chalumattu, and Julien Heitmann. 2023. Entropy-based sampling for abstractive multi-document summarization in low-resource settings. In 16th International Natural Language Generation Conference (INGL 2023).",
|
| 1591 |
+
"Jian-Ping Mei and Lihui Chen. 2012. Sumcr: A new subtopic-based extractive approach for text summarization. Knowledge and information systems, 31:527-545.",
|
| 1592 |
+
"Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into text. In Proceedings of the 2004 conference on empirical methods in natural language processing, pages 404-411.",
|
| 1593 |
+
"Paul Over and James Yen. 2004. An introduction to duc-2004. National Institute of Standards and Technology.",
|
| 1594 |
+
"Dragomir R Radev, Hongyan Jing, Malgorzata Stys, and Daniel Tam. 2004. Centroid-based summarization of multiple documents. Information Processing & Management, 40(6):919-938."
|
| 1595 |
+
],
|
| 1596 |
+
"bbox": [
|
| 1597 |
+
115,
|
| 1598 |
+
85,
|
| 1599 |
+
489,
|
| 1600 |
+
919
|
| 1601 |
+
],
|
| 1602 |
+
"page_idx": 9
|
| 1603 |
+
},
|
| 1604 |
+
{
|
| 1605 |
+
"type": "list",
|
| 1606 |
+
"sub_type": "ref_text",
|
| 1607 |
+
"list_items": [
|
| 1608 |
+
"Yun-Zhu Song, Yi-Syuan Chen, and Hong-Han Shuai. 2022. Improving multi-document summarization through referenced flexible extraction with credit-awareness. arXiv preprint arXiv:2205.01889.",
|
| 1609 |
+
"Krysta Svore, Lucy Vanderwende, and Christopher Burges. 2007. Enhancing single-document summarization by combining ranknet and third-party sources. In Proceedings of the 2007 joint conference on empirical methods in natural language processing and computational natural language learning (EMNLP-CoNLL), pages 448-457.",
|
| 1610 |
+
"Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401.",
|
| 1611 |
+
"Petar Velicković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.",
|
| 1612 |
+
"Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1546-1555.",
|
| 1613 |
+
"Xiaojun Wan, Ziqiang Cao, Furu Wei, Sujian Li, and Ming Zhou. 2015. Multi-document summarization via discriminative summary reranking. arXiv preprint arXiv:1507.02062.",
|
| 1614 |
+
"Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 917-926.",
|
| 1615 |
+
"Xiaojun Wan, Fuli Luo, Xue Sun, Songfang Huang, and Jin-ge Yao. 2019. Cross-language document summarization via extraction and ranking of multiple summaries. Knowledge and Information Systems, 58:481-499.",
|
| 1616 |
+
"Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous graph neural networks for extractive document summarization. arXiv preprint arXiv:2004.12393.",
|
| 1617 |
+
"Dingding Wang, Shenghuo Zhu, Tao Li, and Yihong Gong. 2009. Multi-document summarization using sentence-based topic models. In Proceedings of the ACL-IJCNLP 2009 conference short papers, pages 297-300.",
|
| 1618 |
+
"Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. 2022. PRIMERA: pyramid-based masked sentence pre-training for multi-document summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5245-5263. Association for Computational Linguistics."
|
| 1619 |
+
],
|
| 1620 |
+
"bbox": [
|
| 1621 |
+
510,
|
| 1622 |
+
85,
|
| 1623 |
+
882,
|
| 1624 |
+
919
|
| 1625 |
+
],
|
| 1626 |
+
"page_idx": 9
|
| 1627 |
+
},
|
| 1628 |
+
{
|
| 1629 |
+
"type": "page_number",
|
| 1630 |
+
"text": "9264",
|
| 1631 |
+
"bbox": [
|
| 1632 |
+
480,
|
| 1633 |
+
928,
|
| 1634 |
+
519,
|
| 1635 |
+
940
|
| 1636 |
+
],
|
| 1637 |
+
"page_idx": 9
|
| 1638 |
+
},
|
| 1639 |
+
{
|
| 1640 |
+
"type": "list",
|
| 1641 |
+
"sub_type": "ref_text",
|
| 1642 |
+
"list_items": [
|
| 1643 |
+
"L Xue. 2020. mt5: A massively multilingual pretrained text-to-text transformer. arXiv preprint arXiv:2010.11934.",
|
| 1644 |
+
"Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Parek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. arXiv preprint arXiv:1706.06681.",
|
| 1645 |
+
"Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. Preprint, arXiv:1912.08777.",
|
| 1646 |
+
"Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. NCLS: Neural cross-lingual summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3054–3064, Hong Kong, China. Association for Computational Linguistics.",
|
| 1647 |
+
"Markus Zopf. 2018. Auto-hmds: Automatic construction of a large heterogeneous multilingual multi-document summarization corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)."
|
| 1648 |
+
],
|
| 1649 |
+
"bbox": [
|
| 1650 |
+
114,
|
| 1651 |
+
85,
|
| 1652 |
+
490,
|
| 1653 |
+
455
|
| 1654 |
+
],
|
| 1655 |
+
"page_idx": 10
|
| 1656 |
+
},
|
| 1657 |
+
{
|
| 1658 |
+
"type": "page_number",
|
| 1659 |
+
"text": "9265",
|
| 1660 |
+
"bbox": [
|
| 1661 |
+
480,
|
| 1662 |
+
928,
|
| 1663 |
+
519,
|
| 1664 |
+
940
|
| 1665 |
+
],
|
| 1666 |
+
"page_idx": 10
|
| 1667 |
+
}
|
| 1668 |
+
]
|
2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/a6e57a98-a206-4f15-a537-4cd8fddecece_model.json
ADDED
|
@@ -0,0 +1,2224 @@
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "title",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.131,
|
| 7 |
+
0.084,
|
| 8 |
+
0.867,
|
| 9 |
+
0.123
|
| 10 |
+
],
|
| 11 |
+
"angle": 0,
|
| 12 |
+
"content": "A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.276,
|
| 18 |
+
0.133,
|
| 19 |
+
0.728,
|
| 20 |
+
0.169
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Shengxiang Gao\\(^{1,2}\\), Fang Nan\\(^{1,2}\\), Yongbing Zhang\\(^{1,2}\\), Yuxin Huang\\(^{1,2}\\), Kaiwen Tan\\(^{1,2*}\\), Zhengtao Yu\\(^{1,2}\\)"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.224,
|
| 29 |
+
0.172,
|
| 30 |
+
0.774,
|
| 31 |
+
0.24
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "\\(^{1}\\)Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China \\(^{2}\\)Yunnan Key Laboratory of Artificial Intelligence, Kunming, China {kwtan}@kust.edu.cn"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "title",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.261,
|
| 40 |
+
0.261,
|
| 41 |
+
0.341,
|
| 42 |
+
0.276
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "Abstract"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.142,
|
| 51 |
+
0.291,
|
| 52 |
+
0.461,
|
| 53 |
+
0.603
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "Existing research on news summarization primarily focuses on single-language single-document (SLSD), single-language multi-document (SLMD) or cross-language single-document (CLSD). However, in real-world scenarios, news about an international event often involves multiple documents in different languages, i.e., mixed-language multi-document (MLMD). Therefore, summarizing MLMD news is of great significance. However, the lack of datasets for MLMD news summarization has constrained the development of research in this area. To fill this gap, we construct a mixed-language multi-document news summarization dataset (MLMD-news), which contains four different languages and 10,992 source document cluster and target summary pairs. Additionally, we propose a graph-based extract-generate model and benchmark various methods on the MLMD-news dataset and publicly release our dataset and code<sup>1</sup>, aiming to advance research in summarization within MLMD scenarios."
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "title",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.115,
|
| 62 |
+
0.618,
|
| 63 |
+
0.26,
|
| 64 |
+
0.633
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "1 Introduction"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.113,
|
| 73 |
+
0.644,
|
| 74 |
+
0.49,
|
| 75 |
+
0.886
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "The news summarization task aims to simplify and condense a large volume of news content through automated methods, extracting key information and main viewpoints so that readers can quickly grasp the core content of the news. Existing research on news summarization primarily focuses on single-language single-document (SLSD)(Svore et al., 2007; Litvak and Last, 2008; Liu and Lapata, 2019), single-language multi-document (SLMD)(Haghighi and Vanderwende, 2009; Yasunaga et al., 2017; Wang et al., 2009) and cross-language single-document (CLSD) (Wan et al., 2010; Wan, 2011; Wan et al., 2019). However, in reality, many news articles, especially international news, appear in the form of mixed"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "image",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.514,
|
| 84 |
+
0.259,
|
| 85 |
+
0.693,
|
| 86 |
+
0.424
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": null
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "image",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.7,
|
| 95 |
+
0.259,
|
| 96 |
+
0.879,
|
| 97 |
+
0.423
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": null
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "image_caption",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.509,
|
| 106 |
+
0.434,
|
| 107 |
+
0.884,
|
| 108 |
+
0.52
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "Figure 1: The diagram of SLSD, SLMD, CLSD and MLMD. Each rounded rectangle represents a source document, while the pointed rectangle represents the target summary. \"En\", \"De\", \"Fr\" and \"Es\" indicate that the text is in English, German, French, and Spanish, respectively."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "text",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.508,
|
| 117 |
+
0.55,
|
| 118 |
+
0.884,
|
| 119 |
+
0.596
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "language multi-document (MLMD). Figure 1 illustrates the four tasks: SLSD, SLMD, CLSD, and MLMD."
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.508,
|
| 128 |
+
0.6,
|
| 129 |
+
0.885,
|
| 130 |
+
0.922
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "It is noteworthy that, with the advancement of multi-language models such as mBART (Tang et al., 2020) and GPT (Floridi and Chiriatti, 2020; Achiam et al., 2023), a task referred to as multi-language multi-document news summarization has recently emerged (Giannakopoulos, 2013; Zopf, 2018; Mascarell et al., 2023). In the task, although the languages of different source document clusters vary, each individual source document cluster consists of multiple documents in the same language. Therefore, in each instance of summary generation for this type of task, it essentially falls under the category SLMD. In contrast, in MLMD, each individual source document cluster is composed of multiple documents in different languages. From this perspective, MLMD is more challenging than multi-language multi-document. The latter requires the model to have the capability to understand multiple documents in the current language during a single summary generation. In contrast,"
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "page_footnote",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.136,
|
| 139 |
+
0.894,
|
| 140 |
+
0.428,
|
| 141 |
+
0.921
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "\\*Corresponding author. 1https://github.com/Southnf9/MLMD-news"
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "page_number",
|
| 148 |
+
"bbox": [
|
| 149 |
+
0.48,
|
| 150 |
+
0.928,
|
| 151 |
+
0.521,
|
| 152 |
+
0.941
|
| 153 |
+
],
|
| 154 |
+
"angle": 0,
|
| 155 |
+
"content": "9255"
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"type": "footer",
|
| 159 |
+
"bbox": [
|
| 160 |
+
0.018,
|
| 161 |
+
0.946,
|
| 162 |
+
0.978,
|
| 163 |
+
0.959
|
| 164 |
+
],
|
| 165 |
+
"angle": 0,
|
| 166 |
+
"content": "Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies"
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"type": "footer",
|
| 170 |
+
"bbox": [
|
| 171 |
+
0.365,
|
| 172 |
+
0.959,
|
| 173 |
+
0.634,
|
| 174 |
+
0.972
|
| 175 |
+
],
|
| 176 |
+
"angle": 0,
|
| 177 |
+
"content": "(Volume 1: Long Papers), pages 9255-9265"
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"type": "footer",
|
| 181 |
+
"bbox": [
|
| 182 |
+
0.275,
|
| 183 |
+
0.973,
|
| 184 |
+
0.724,
|
| 185 |
+
0.986
|
| 186 |
+
],
|
| 187 |
+
"angle": 0,
|
| 188 |
+
"content": "April 29 - May 4, 2025 ©2025 Association for Computational Linguistics"
|
| 189 |
+
}
|
| 190 |
+
],
|
| 191 |
+
[
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.113,
|
| 196 |
+
0.085,
|
| 197 |
+
0.489,
|
| 198 |
+
0.149
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": "MLMD requires the model to simultaneously possess the ability to understand multiple languages and multiple documents within a single summary generation."
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "text",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.117,
|
| 207 |
+
0.151,
|
| 208 |
+
0.49,
|
| 209 |
+
0.486
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "However, the lack of MLMD news datasets has hindered progress in this field. Therefore, we first construct a MLMD-news dataset. This dataset includes documents in four languages: English, German, French, and Spanish, with a total of 10,992 source document clusters and corresponding target summaries. Each source document cluster is composed of multiple documents in different languages, and the corresponding target summary is in English. Additionally, we propose a graph-based extract-generate model for the MLMD task. This model first uses an extractor based on graph neural networks to extract key sentences from a source document cluster, and then employs a generator based on pre-trained models to generate the target summary based on these key sentences. Finally, we benchmark various methods on the MLMD-news dataset and publicly release our dataset and code to advance research in summarization within MLMD scenarios. The contributions of this paper are summarized as follows:"
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.137,
|
| 218 |
+
0.496,
|
| 219 |
+
0.487,
|
| 220 |
+
0.559
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "- We construct the first mixed-language multi-document (MLMD) dataset, where each source document cluster contains multiple news documents in different languages."
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "text",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.137,
|
| 229 |
+
0.571,
|
| 230 |
+
0.485,
|
| 231 |
+
0.601
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": "- We propose a graph-based extract-generate model as a benchmark for MLMD."
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.137,
|
| 240 |
+
0.612,
|
| 241 |
+
0.487,
|
| 242 |
+
0.674
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "- We perform benchmark experiments on the MLMD using various methods and have publicly released the dataset and code to advance research in this field."
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "list",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.137,
|
| 251 |
+
0.496,
|
| 252 |
+
0.487,
|
| 253 |
+
0.674
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": null
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "title",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.115,
|
| 262 |
+
0.687,
|
| 263 |
+
0.271,
|
| 264 |
+
0.703
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "2 Related Work"
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.113,
|
| 273 |
+
0.713,
|
| 274 |
+
0.489,
|
| 275 |
+
0.744
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "The related work in news summarization research primarily focuses on three areas, as detailed below:"
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "text",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.113,
|
| 284 |
+
0.746,
|
| 285 |
+
0.49,
|
| 286 |
+
0.921
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "Single-Language Single-Document Summarization (SLSD): As shown in Figure 1.(a), the SLSD news summarization task takes a source document as input and outputs a target summary in the same language. Existing methods are mainly divided into two categories: extractive and abstractive. Extractive summarization constructs the target summary by directly selecting key sentences or paragraphs from the source document, such as TextRank (Mihalcea and Tarau, 2004) and DeepSumm (Joshi et al., 2023). Abstractive summarization, on"
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"bbox": [
|
| 294 |
+
0.508,
|
| 295 |
+
0.085,
|
| 296 |
+
0.883,
|
| 297 |
+
0.165
|
| 298 |
+
],
|
| 299 |
+
"angle": 0,
|
| 300 |
+
"content": "the other hand, involves first understanding the content of the source document and then generating new summary sentences for the target summary, such as BERTSUM (Liu and Lapata, 2019) and COGITOERGOSUMM (Frisoni et al., 2023)."
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"type": "text",
|
| 304 |
+
"bbox": [
|
| 305 |
+
0.508,
|
| 306 |
+
0.166,
|
| 307 |
+
0.885,
|
| 308 |
+
0.438
|
| 309 |
+
],
|
| 310 |
+
"angle": 0,
|
| 311 |
+
"content": "Cross-Language Single-Document Summarization (CLSD): As shown in Figure 1.(b), the CLSD news summarization task takes a source document as input and produces a target summary in a different language. Existing research is primarily divided into pipeline-based and end-to-end. Traditional CLSD methods typically use a pipeline-based methods (Boudin et al., 2011; Linhares Pontes et al., 2018), where the source document is first translated and then summarized, or the summary is generated first and then translated into the target language. In recent years, researchers have increasingly focused on end-to-end CLSD methods (Le, 2024; Cai and Yuan, 2024), which can directly generate summaries in the target language, significantly reducing the risk of error propagation."
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"type": "text",
|
| 315 |
+
"bbox": [
|
| 316 |
+
0.508,
|
| 317 |
+
0.439,
|
| 318 |
+
0.884,
|
| 319 |
+
0.728
|
| 320 |
+
],
|
| 321 |
+
"angle": 0,
|
| 322 |
+
"content": "Single-Language Multi-Document Summarization (SLMD): As shown in Figure 1.(c), the SLMD news summarization task takes a source document cluster as input, which contains multiple documents, and the output is a target summary in the same language. Existing methods can be categorized into extractive, abstractive, and hybrid. In the early days, due to the small sample size of SLMD datasets like DUC 2004 (Over and Yen, 2004), research on multi-document summarization primarily relied on extractive methods (Mei and Chen, 2012; Wan et al., 2015). In recent years, the emergence of large-scale SLMD datasets, such as Multi-News(Fabbri et al., 2019), has accelerated the development of abstractive (Jin and Wan, 2020; Liu et al., 2021) and hybrid SLMD news summarization (Celikyilmaz and Hakkani-Tur, 2010; Song et al., 2022; Ghadimi and Beigy, 2022)."
|
| 323 |
+
},
|
| 324 |
+
{
|
| 325 |
+
"type": "text",
|
| 326 |
+
"bbox": [
|
| 327 |
+
0.508,
|
| 328 |
+
0.729,
|
| 329 |
+
0.884,
|
| 330 |
+
0.921
|
| 331 |
+
],
|
| 332 |
+
"angle": 0,
|
| 333 |
+
"content": "Recently, with the development of multi-language models such as mBART (Tang et al., 2020) and GPT (Floridi and Chiriatti, 2020; Achiam et al., 2023), a task known as multi-language multi-document news summarization (Giannakopoulos, 2013; Zopf, 2018; Mascarell et al., 2023) has emerged within the SLMD paradigm. This task involves inputs and outputs similar to those in SLMD, where a source document cluster is input and a target summary in the same language is produced. The difference lies in that the languages of different source document clusters can"
|
| 334 |
+
},
|
| 335 |
+
{
|
| 336 |
+
"type": "page_number",
|
| 337 |
+
"bbox": [
|
| 338 |
+
0.481,
|
| 339 |
+
0.929,
|
| 340 |
+
0.521,
|
| 341 |
+
0.941
|
| 342 |
+
],
|
| 343 |
+
"angle": 0,
|
| 344 |
+
"content": "9256"
|
| 345 |
+
}
|
| 346 |
+
],
|
| 347 |
+
[
|
| 348 |
+
{
|
| 349 |
+
"type": "image",
|
| 350 |
+
"bbox": [
|
| 351 |
+
0.117,
|
| 352 |
+
0.085,
|
| 353 |
+
0.486,
|
| 354 |
+
0.281
|
| 355 |
+
],
|
| 356 |
+
"angle": 0,
|
| 357 |
+
"content": null
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "image_caption",
|
| 361 |
+
"bbox": [
|
| 362 |
+
0.113,
|
| 363 |
+
0.293,
|
| 364 |
+
0.49,
|
| 365 |
+
0.478
|
| 366 |
+
],
|
| 367 |
+
"angle": 0,
|
| 368 |
+
"content": "Figure 2: The diagram illustrates the construction process of the MLMD-news dataset. First, a round-trip translation (RTT) strategy is employed to translate each news document in the source document clusters of the Multi-News dataset into multiple languages and then back into the original language. This process allows the calculation of the ROUGE-1 score matrix for the document cluster. Based on this score matrix, a greedy selection strategy is used to assign a corresponding language to each news document. The original content of the news document is then replaced with the translated content in the assigned language, resulting in a source document cluster with mixed languages."
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "text",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.114,
|
| 374 |
+
0.523,
|
| 375 |
+
0.489,
|
| 376 |
+
0.556
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": "vary, thereby further requiring the model to have multilingual understanding capabilities."
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "title",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.115,
|
| 385 |
+
0.6,
|
| 386 |
+
0.371,
|
| 387 |
+
0.615
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": "3 The MLMD-news dataset"
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "text",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.113,
|
| 396 |
+
0.648,
|
| 397 |
+
0.49,
|
| 398 |
+
0.92
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "The overall process of constructing the MLMD-news dataset is illustrated in Figure 2. The MLMD-news dataset is built upon the Multi-News dataset, which is a well-known and widely used English multi-document summarization dataset. The construction process employs a round-trip translation strategy and a greedy selection strategy, to approximate real-world cases. Similar methods, like the NCLS dataset (Zhu et al., 2019) in cross-language summarization, have been widely accepted and used. The main goal of the round-trip translation strategy is to calculate a ROUGE-1 score matrix that reflects translation quality, while the greedy selection strategy is used to assign the most suitable language to each news document in the source document cluster and make the necessary replacements."
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "title",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.509,
|
| 407 |
+
0.085,
|
| 408 |
+
0.811,
|
| 409 |
+
0.102
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": "3.1 Round-trip Translation Strategy"
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "text",
|
| 416 |
+
"bbox": [
|
| 417 |
+
0.508,
|
| 418 |
+
0.107,
|
| 419 |
+
0.885,
|
| 420 |
+
0.267
|
| 421 |
+
],
|
| 422 |
+
"angle": 0,
|
| 423 |
+
"content": "The round-trip translation strategy first uses machine translation services² to translate text from the original language into another language (forward translation) and then uses machine translation services again to translate the text back from the other language into the original language (back translation). This strategy has been utilized by Zhu et al. (2019) to construct cross-language single-document summarization datasets from single-language single-document summarization datasets."
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "text",
|
| 427 |
+
"bbox": [
|
| 428 |
+
0.508,
|
| 429 |
+
0.268,
|
| 430 |
+
0.885,
|
| 431 |
+
0.526
|
| 432 |
+
],
|
| 433 |
+
"angle": 0,
|
| 434 |
+
"content": "Therefore, we use the round-trip translation strategy to construct MLMD-news dataset. First, the original English news documents from the MultiNews dataset are translated into Spanish, French, and German through forward translation. Then, these translated documents are back-translated into English. The English documents obtained from the back translation of each language are compared with the original English news documents, and ROUGE-1 scores are calculated. If the ROUGE-1 score for a particular language is below a threshold, it is set to zero. Conversely, if the ROUGE-1 score is equal to or above the threshold, the score is retained, resulting in a ROUGE-1 score matrix (where each row represents a document and each column corresponds to a language)."
|
| 435 |
+
},
|
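As a concrete illustration of the round-trip translation scoring just described, here is a minimal sketch. It is not the authors' code: `translate(text, src, tgt)` is an assumed stand-in for a machine translation service, `rouge1_f` is a simple unigram-overlap ROUGE-1 F1 rather than the official scorer, and the 0.5 threshold is a placeholder, since the paper does not state the value used.

```python
# Minimal sketch of the round-trip translation (RTT) scoring of Section 3.1.
# Assumptions: `translate(text, src, tgt)` stands in for a machine translation
# service, and the 0.5 threshold is a placeholder (the paper's value is not given).
from collections import Counter

LANGS = ["es", "fr", "de"]  # candidate target languages

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap ROUGE-1 F1 (simplified; not the official scorer)."""
    c, r = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((c & r).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def rtt_score_matrix(docs, translate, threshold=0.5):
    """One row per document, one column per language; sub-threshold scores -> 0."""
    matrix = []
    for doc in docs:
        row = []
        for lang in LANGS:
            forward = translate(doc, "en", lang)   # forward translation
            back = translate(forward, lang, "en")  # back translation
            score = rouge1_f(back, doc)
            row.append(score if score >= threshold else 0.0)
        matrix.append(row)
    return matrix
```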
| 436 |
+
{
|
| 437 |
+
"type": "title",
|
| 438 |
+
"bbox": [
|
| 439 |
+
0.509,
|
| 440 |
+
0.538,
|
| 441 |
+
0.76,
|
| 442 |
+
0.554
|
| 443 |
+
],
|
| 444 |
+
"angle": 0,
|
| 445 |
+
"content": "3.2 Greedy Selection Strategy"
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "text",
|
| 449 |
+
"bbox": [
|
| 450 |
+
0.508,
|
| 451 |
+
0.559,
|
| 452 |
+
0.885,
|
| 453 |
+
0.897
|
| 454 |
+
],
|
| 455 |
+
"angle": 0,
|
| 456 |
+
"content": "As shown in Figure 2, after obtaining the ROUGE-1 score matrix, a greedy selection strategy is used to assign a language to each news document in the document cluster from Multi-News dataset. Specifically, this involves first identifying the row and column of the maximum value in the matrix, and assigning the language indicated by the column to the document indicated by the row. The corresponding row and column are then removed to form a new submatrix. This process is repeated until all news documents have been assigned a language. If at any step all values in the matrix are found to be zero, the language of remaining news documents in the submatrix is assigned as English. After completing the language assignment, each news document is transformed into the assigned language using the forward translation of the round-trip translation, replacing the content of the source document. If the assigned language is English, the document remains in its original English form. This results in a mixed-language document cluster."
|
| 457 |
+
},
|
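Concretely, the greedy selection strategy is a repeated argmax over the score matrix with row-and-column elimination. The sketch below assumes the matrix produced by the round-trip translation step (rows are documents; columns are the candidate languages); documents left over once all remaining scores are zero, or once every language has been used, keep English. The all-zeros fallback follows the description above; handling of clusters with more documents than languages is an assumption of this sketch.

```python
# Sketch of the greedy selection strategy of Section 3.2: repeatedly take the
# largest remaining score, assign that language to that document, and remove
# the corresponding row and column. Leftover documents stay in English.
def greedy_assign(matrix, langs=("es", "fr", "de")):
    free_docs = set(range(len(matrix)))
    free_langs = set(range(len(langs)))
    assignment = {}
    while free_docs and free_langs:
        score, i, j = max((matrix[i][j], i, j)
                          for i in free_docs for j in free_langs)
        if score == 0.0:              # all remaining scores are zero
            break
        assignment[i] = langs[j]
        free_docs.remove(i)           # remove the row...
        free_langs.remove(j)          # ...and the column
    for i in free_docs:               # remaining documents keep English
        assignment[i] = "en"
    return assignment
```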
| 458 |
+
{
|
| 459 |
+
"type": "page_footnote",
|
| 460 |
+
"bbox": [
|
| 461 |
+
0.53,
|
| 462 |
+
0.907,
|
| 463 |
+
0.799,
|
| 464 |
+
0.921
|
| 465 |
+
],
|
| 466 |
+
"angle": 0,
|
| 467 |
+
"content": "\\(^{2}\\)https://cloud.google.com/translate"
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"type": "page_number",
|
| 471 |
+
"bbox": [
|
| 472 |
+
0.481,
|
| 473 |
+
0.928,
|
| 474 |
+
0.521,
|
| 475 |
+
0.941
|
| 476 |
+
],
|
| 477 |
+
"angle": 0,
|
| 478 |
+
"content": "9257"
|
| 479 |
+
}
|
| 480 |
+
],
|
| 481 |
+
[
|
| 482 |
+
{
|
| 483 |
+
"type": "table",
|
| 484 |
+
"bbox": [
|
| 485 |
+
0.116,
|
| 486 |
+
0.082,
|
| 487 |
+
0.495,
|
| 488 |
+
0.374
|
| 489 |
+
],
|
| 490 |
+
"angle": 0,
|
| 491 |
+
"content": "<table><tr><td></td><td></td><td>Train</td><td>Vaild</td><td>Test</td></tr><tr><td rowspan=\"6\">Total</td><td>#</td><td>8444</td><td>1277</td><td>1271</td></tr><tr><td>Avg.Doc</td><td>2.79</td><td>2.75</td><td>2.71</td></tr><tr><td>Avg ClusterWords</td><td>2442.97</td><td>2457.48</td><td>2255.81</td></tr><tr><td>Avg ClusterSents</td><td>84.14</td><td>85.49</td><td>77.85</td></tr><tr><td>Avg.SumWords</td><td>269.17</td><td>268.32</td><td>265.70</td></tr><tr><td>Avg.SumSents</td><td>9.70</td><td>9.60</td><td>9.56</td></tr><tr><td rowspan=\"3\">En</td><td>Count</td><td>7088</td><td>1027</td><td>1009</td></tr><tr><td>Avg.DocWords</td><td>653.19</td><td>732.79</td><td>706.98</td></tr><tr><td>Avg.DocSents</td><td>24.52</td><td>27.18</td><td>26.00</td></tr><tr><td rowspan=\"3\">Fr</td><td>Count</td><td>5307</td><td>779</td><td>779</td></tr><tr><td>Avg.DocWords</td><td>1020.20</td><td>969.46</td><td>951.30</td></tr><tr><td>Avg.DocSents</td><td>32.99</td><td>31.93</td><td>31.53</td></tr><tr><td rowspan=\"3\">De</td><td>Count</td><td>4431</td><td>693</td><td>646</td></tr><tr><td>Avg.DocWords</td><td>981.05</td><td>1036.24</td><td>825.07</td></tr><tr><td>Avg.DocSents</td><td>32.96</td><td>35.34</td><td>26.93</td></tr><tr><td rowspan=\"3\">Es</td><td>Count</td><td>6769</td><td>1015</td><td>1009</td></tr><tr><td>Avg.DocWords</td><td>1047.78</td><td>1048.92</td><td>1015.58</td></tr><tr><td>Avg.DocSents</td><td>36.20</td><td>36.64</td><td>35.44</td></tr></table>"
|
| 492 |
+
},
|
| 493 |
+
{
|
| 494 |
+
"type": "table_footnote",
|
| 495 |
+
"bbox": [
|
| 496 |
+
0.113,
|
| 497 |
+
0.382,
|
| 498 |
+
0.49,
|
| 499 |
+
0.568
|
| 500 |
+
],
|
| 501 |
+
"angle": 0,
|
| 502 |
+
"content": "Table 1: Statistics of the MLMD-news dataset. \"#\" represents the number of source document cluster and target summary pairs. \"Avg.Doc\", \"Avg ClusterWords\" and \"Avg.ClusterSents\" indicate the average number of documents, average number of tokens, and average number of sentences per source document cluster, respectively. \"Avg.SumWords\" and \"Avg.SumSents\" denote the average number of tokens and average number of sentences in the target summary. \"Count\", \"Avg.DocWords\" and \"Avg.DocSents\" represent the total number of documents, average number of tokens per document, and average number of sentences per document, respectively."
|
| 503 |
+
},
|
| 504 |
+
{
|
| 505 |
+
"type": "text",
|
| 506 |
+
"bbox": [
|
| 507 |
+
0.113,
|
| 508 |
+
0.595,
|
| 509 |
+
0.489,
|
| 510 |
+
0.645
|
| 511 |
+
],
|
| 512 |
+
"angle": 0,
|
| 513 |
+
"content": "Finally, this mixed-language document cluster is combined with the original target summary to form an MLMD summary pair."
|
| 514 |
+
},
|
| 515 |
+
{
|
| 516 |
+
"type": "title",
|
| 517 |
+
"bbox": [
|
| 518 |
+
0.114,
|
| 519 |
+
0.658,
|
| 520 |
+
0.338,
|
| 521 |
+
0.673
|
| 522 |
+
],
|
| 523 |
+
"angle": 0,
|
| 524 |
+
"content": "3.3 Statistics and Analysis"
|
| 525 |
+
},
|
| 526 |
+
{
|
| 527 |
+
"type": "text",
|
| 528 |
+
"bbox": [
|
| 529 |
+
0.113,
|
| 530 |
+
0.681,
|
| 531 |
+
0.49,
|
| 532 |
+
0.922
|
| 533 |
+
],
|
| 534 |
+
"angle": 0,
|
| 535 |
+
"content": "Through the aforementioned process, we constructed the MLMD-news dataset, which contains 10,992 pairs of source document clusters and target summaries. The source document clusters include four languages: English, French, German, and Spanish, while the target summaries are all in English. The dataset was divided into training, validation, and test sets. Table 1 presents the statistical information of the MLMD-news dataset. Figure 3 shows the number of news documents in different languages across the training, validation, and test sets. Due to the quality control implemented through the round-trip translation strategy during processing, there are differences in the proportions of news documents in different languages."
},
{"type": "image", "bbox": [0.514, 0.083, 0.883, 0.182], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.509, 0.195, 0.882, 0.224], "angle": 0, "content": "Figure 3: The number of news documents in different languages across the training, validation, and test sets."},
{"type": "image", "bbox": [0.512, 0.244, 0.88, 0.362], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.508, 0.372, 0.883, 0.416], "angle": 0, "content": "Figure 4: The proportion of the number of languages involved in the source document clusters across the training, validation, and test sets."},
{"type": "text", "bbox": [0.508, 0.447, 0.884, 0.608], "angle": 0, "content": "The number of English and Spanish documents is roughly equal, French ranks third, and German has the fewest articles. Figure 4 illustrates the proportion of the number of languages involved in the source document clusters across the training, validation, and test sets. The proportions are relatively consistent, indicating a stable language distribution. Source document clusters containing 2 languages are the most common, while those containing 4 languages are the least common."},
{"type": "title", "bbox": [0.509, 0.631, 0.874, 0.647], "angle": 0, "content": "4 Graph-based Extract-Generate Model"},
{"type": "text", "bbox": [0.507, 0.665, 0.885, 0.922], "angle": 0, "content": "To the best of our knowledge, there is currently no method specifically designed for the MLMD task. The input for MLMD news summarization consists of multiple mixed-language documents, which presents two main challenges: the excessive length of the input and the complex relationships between multiple documents and languages. Therefore, we propose a graph-based extractive-generative model (as shown in Figure 5) as a baseline for this task. The extract-then-generate approach addresses the issue of long input, while the graph is used to model the complex relationships between multiple documents and languages. The model consists of three main modules: Graph Construction, Extractor, and Generator. This section will provide a detailed explanation of these three modules."},
{"type": "page_number", "bbox": [0.481, 0.928, 0.521, 0.941], "angle": 0, "content": "9258"}
],
[
{"type": "image", "bbox": [0.161, 0.081, 0.845, 0.304], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.113, 0.314, 0.885, 0.371], "angle": 0, "content": "Figure 5: The framework of the extract-generate model involves three main components. In the Graph Construction, mixed-language source document clusters are constructed into both homogeneous and heterogeneous graphs. The Extractor extracts key sentences from the source document cluster, while the Generator generates a summary based on the sentences extracted by the Extractor."},
{"type": "title", "bbox": [0.114, 0.397, 0.321, 0.412], "angle": 0, "content": "4.1 Graph Construction"},
{"type": "text", "bbox": [0.113, 0.419, 0.49, 0.514], "angle": 0, "content": "In order to model the complex relationships between multiple documents and languages, we constructed a homogeneous graph between sentences, as well as a heterogeneous graph between sentences and words, for each input mixed-language document cluster."},
{"type": "title", "bbox": [0.114, 0.527, 0.45, 0.543], "angle": 0, "content": "4.1.1 Homogeneous Graph Construction"},
{"type": "text", "bbox": [0.113, 0.548, 0.49, 0.678], "angle": 0, "content": "Let \\( G_{1} = \\{V_{1},E_{1}\\} \\) denote a homogeneous graph, where the node set \\( V_{1} = \\{s_{1},s_{2},\\ldots ,s_{n}\\} \\) corresponds to the sentences within the document cluster, and the edge set \\( E_{1} = \\{e_{1,1},e_{1,2},\\ldots ,e_{n,n}\\} \\) denotes the connections between sentences that share common words. Moreover, we follow BERTSUM (Liu and Lapata, 2019) to obtain the initial node representations \\( H_{s}^{\\prime (0)} \\)."},
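The homogeneous graph above comes down to one rule: connect two sentence nodes whenever they share a common word. A minimal sketch of that rule, assuming whitespace tokenization and toy sentences (the paper's exact preprocessing is not spelled out here):

```python
# Build the sentence-sentence edge set of the homogeneous graph G1:
# an edge (i, j) exists when sentences i and j share at least one word.
from itertools import combinations

def build_homogeneous_graph(sentences):
    """Return an edge set {(i, j), ...} over sentence indices."""
    words = [set(s.lower().split()) for s in sentences]
    edges = set()
    for i, j in combinations(range(len(sentences)), 2):
        if words[i] & words[j]:  # at least one shared word
            edges.add((i, j))
    return edges

sentences = [
    "The summit opened in Berlin on Monday.",
    "Delegates arrived in Berlin for the summit.",
    "Unrelated sports results followed.",
]
print(build_homogeneous_graph(sentences))  # {(0, 1)}
```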
{"type": "title", "bbox": [0.114, 0.691, 0.456, 0.707], "angle": 0, "content": "4.1.2 Heterogeneous Graph Construction"},
{"type": "text", "bbox": [0.113, 0.711, 0.49, 0.921], "angle": 0, "content": "Let \\( G_{2} = \\{V_{2}, E_{2}\\} \\) denote a heterogeneous graph, where \\( V_{2} \\) is the set of nodes and \\( E_{2} \\) is the set of edges. In this graph, the nodes can be represented as \\( V_{2} = V_{1} \\cup V_{w} \\), where \\( V_{w} = \\{w_{1}, w_{2}, \\ldots, w_{k}\\} \\) is the set of words. The edges, denoted as \\( E_{2} = \\{e_{1,1}, \\ldots, e_{1,k}, \\ldots, e_{n,1}, \\ldots, e_{n,k}\\} \\), represent the connections between the \\( i^{th} \\) sentence and the \\( j^{th} \\) word, with edge weights determined by TF-IDF (Aizawa, 2003). We use mBERT to initialize the static embeddings of the word nodes as \\( H_{w}^{(0)} \\). To obtain the representations of the sentence nodes, we first concatenate the tokens of the words in each sentence and then input them into mBERT to create"},
{"type": "text", "bbox": [0.508, 0.396, 0.885, 0.558], "angle": 0, "content": "the initial representation of the sentence node as \\( H_{s} \\). Next, we employ a convolutional neural network (CNN) to capture local information within the sentence. To extract sentence-level features, we apply a bidirectional long short-term memory (BiLSTM) network to capture contextual dependencies. Finally, by concatenating the outputs of the CNN and BiLSTM, we generate a sentence-node representation that encompasses both intra-sentence and inter-sentence information, denoted as \\( H_{s}^{(0)} \\)."},
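The sentence-word edges of \( G_2 \) carry TF-IDF weights. A small sketch of how such weighted edges could be enumerated; sklearn's TfidfVectorizer stands in here for the exact TF-IDF variant of Aizawa (2003), which is an assumption:

```python
# Enumerate weighted sentence-word edges for the heterogeneous graph G2.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "the summit opened in berlin",
    "delegates arrived in berlin",
]
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(sentences)   # shape: (n_sentences, n_words)
vocab = vectorizer.get_feature_names_out()

edges = {}                                    # (sentence_i, word) -> weight
rows, cols = tfidf.nonzero()
for i, j in zip(rows, cols):
    edges[(int(i), vocab[j])] = round(float(tfidf[i, j]), 3)
print(edges)
```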
{"type": "title", "bbox": [0.509, 0.57, 0.634, 0.584], "angle": 0, "content": "4.2 Extractor"},
{"type": "text", "bbox": [0.508, 0.591, 0.884, 0.622], "angle": 0, "content": "In the extractor, we first perform sentence representation learning and then extract key sentences."},
{"type": "title", "bbox": [0.509, 0.631, 0.842, 0.647], "angle": 0, "content": "4.2.1 Sentence Representation Learning"},
{"type": "text", "bbox": [0.508, 0.65, 0.884, 0.731], "angle": 0, "content": "Before extracting key sentences, we first use GAT (Graph Attention Network) (Veličković et al., 2017) and heterogeneous GAT (Wang et al., 2020) to learn sentence representations on the homogeneous and heterogeneous graphs, respectively."},
{"type": "text", "bbox": [0.508, 0.731, 0.884, 0.779], "angle": 0, "content": "In the homogeneous graph, we calculate the sentence representation \\( H_{s}^{\\prime (1)} \\) using the following formula:"},
{"type": "equation", "bbox": [0.556, 0.803, 0.884, 0.831], "angle": 0, "content": "\\[\nU_{s \\rightarrow s}^{\\prime (1)} = \\mathrm{GAT}_{s2s}\\left(H_{s}^{\\prime (0)}, H_{s}^{\\prime (0)}, H_{s}^{\\prime (0)}\\right)\n\\]"},
{"type": "equation", "bbox": [0.559, 0.826, 0.783, 0.846], "angle": 0, "content": "\\[\nH_{s}^{\\prime (1)} = \\mathrm{FNN}\\left(H_{s}^{\\prime (0)} + U_{s \\rightarrow s}^{\\prime (1)}\\right) \\tag{1}\n\\]"},
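Eq. (1) is one graph-attention round over the sentence nodes followed by a residual connection and a feed-forward network (FNN). A minimal PyTorch sketch, with torch_geometric's GATConv standing in for GAT_{s2s}; the hidden size and the two-layer FNN are assumptions:

```python
# One homogeneous-graph update round, mirroring Eq. (1).
import torch
from torch import nn
from torch_geometric.nn import GATConv

d = 64
gat_s2s = GATConv(d, d)                      # stands in for GAT_s2s
fnn = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))

H0 = torch.randn(5, d)                       # H'^(0): 5 sentence nodes
edge_index = torch.tensor([[0, 1, 1, 2],     # sentence pairs that
                           [1, 0, 2, 1]])    # share common words
U = gat_s2s(H0, edge_index)                  # U'^(1)_{s->s}
H1 = fnn(H0 + U)                             # H'^(1)_s with residual, Eq. (1)
print(H1.shape)                              # torch.Size([5, 64])
```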
{"type": "text", "bbox": [0.508, 0.856, 0.885, 0.919], "angle": 0, "content": "In the heterogeneous graph, we learn the sentence representation \\( H_{s}^{(n)} \\) through \\( n \\) iterations, where the iteration process at step \\( t + 1 \\) is as follows:"},
{"type": "page_number", "bbox": [0.481, 0.928, 0.521, 0.941], "angle": 0, "content": "9259"}
],
[
{"type": "equation", "bbox": [0.136, 0.104, 0.41, 0.124], "angle": 0, "content": "\\[\nU_{s \\rightarrow w}^{(t+1)} = \\mathrm{GAT}_{s2w}\\left(H_{w}^{(t)}, H_{s}^{(t)}, H_{s}^{(t)}\\right)\n\\]"},
{"type": "equation", "bbox": [0.136, 0.126, 0.487, 0.154], "angle": 0, "content": "\\[\nH_{w}^{(t+1)} = \\mathrm{FNN}\\left(H_{w}^{(t)} + U_{s \\rightarrow w}^{(t+1)}\\right) \\tag{2}\n\\]"},
{"type": "equation", "bbox": [0.139, 0.143, 0.488, 0.168], "angle": 0, "content": "\\[\nU_{w \\rightarrow s}^{(t+1)} = \\mathrm{GAT}_{w2s}\\left(H_{s}^{(t)}, H_{w}^{(t+1)}, H_{w}^{(t+1)}\\right)\n\\]"},
{"type": "equation", "bbox": [0.139, 0.17, 0.36, 0.19], "angle": 0, "content": "\\[\nH_{s}^{(t+1)} = \\mathrm{FNN}\\left(H_{s}^{(t)} + U_{w \\rightarrow s}^{(t+1)}\\right)\n\\]"},
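Eq. (2) alternates sentence-to-word and word-to-sentence message passing for \( n \) rounds. The sketch below mirrors that control flow; for brevity, a degree-normalized mean aggregation stands in for the GAT attention, and all dimensions are assumed:

```python
# Iterative heterogeneous updates in the spirit of Eq. (2).
import torch
from torch import nn

d, n_sent, n_word, n_iter = 64, 4, 9, 2
fnn_w, fnn_s = nn.Linear(d, d), nn.Linear(d, d)
# A[i, j] = 1 if sentence i contains word j (TF-IDF weighted in the paper)
A = (torch.rand(n_sent, n_word) > 0.6).float()

H_s, H_w = torch.randn(n_sent, d), torch.randn(n_word, d)
for _ in range(n_iter):
    # sentence -> word messages (GAT_s2w in the paper)
    U_s2w = (A.t() @ H_s) / A.t().sum(1, keepdim=True).clamp(min=1)
    H_w = fnn_w(H_w + U_s2w)
    # word -> sentence messages (GAT_w2s in the paper)
    U_w2s = (A @ H_w) / A.sum(1, keepdim=True).clamp(min=1)
    H_s = fnn_s(H_s + U_w2s)
print(H_s.shape)  # H_s^(n) after n_iter rounds
```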
{"type": "title", "bbox": [0.114, 0.196, 0.376, 0.212], "angle": 0, "content": "4.2.2 Extracting Key Sentences"},
{"type": "text", "bbox": [0.113, 0.216, 0.49, 0.424], "angle": 0, "content": "To extract key sentences, we first concatenate the sentence representations obtained from the homogeneous graph sentence representation learning, which capture inter-sentence relationships, with the sentence representations obtained from the heterogeneous graph sentence representation learning, which capture intra-sentence relationships. This results in the final sentence representation that incorporates both inter-sentence and intra-sentence relationships. Then, we use top-\\(K\\) selection as defined in Section 4.4 to extract the indices and scores of the top-\\(K\\) key sentences. The above process can be represented as follows:"},
{"type": "equation", "bbox": [0.188, 0.43, 0.488, 0.457], "angle": 0, "content": "\\[\nH_{s}^{*} = H_{s}^{\\prime (1)} \\oplus H_{s}^{(n)} \\tag{3}\n\\]"},
{"type": "equation", "bbox": [0.189, 0.452, 0.412, 0.47], "angle": 0, "content": "\\[\n\\text{indices}, \\text{score} = \\text{top-}K\\left(H_{s}^{*}\\right)\n\\]"},
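Eq. (3) concatenates the two sentence views and keeps the \( K \) highest-scoring sentences. A sketch using torch.topk; the linear scoring head is our assumption, since the text only specifies a top-\( K \) selection over \( H_{s}^{*} \):

```python
# Concatenate both sentence views and select the top-K key sentences.
import torch
from torch import nn

d, n_sent, K = 64, 12, 3
H_homo = torch.randn(n_sent, d)             # H'^(1)_s (inter-sentence view)
H_hetero = torch.randn(n_sent, d)           # H^(n)_s (intra-sentence view)
H_star = torch.cat([H_homo, H_hetero], -1)  # Eq. (3)

scorer = nn.Linear(2 * d, 1)                # assumed scoring head
score, indices = torch.topk(scorer(H_star).squeeze(-1), K)
print(indices.tolist())                     # positions of the key sentences
```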
{"type": "text", "bbox": [0.114, 0.476, 0.49, 0.526], "angle": 0, "content": "Finally, we use these indices to locate key sentences, and combine them to form the key snippet \\( X_{key} \\)."},
{"type": "title", "bbox": [0.114, 0.534, 0.245, 0.548], "angle": 0, "content": "4.3 Generator"},
{"type": "text", "bbox": [0.113, 0.555, 0.49, 0.732], "angle": 0, "content": "In the generator, we first input \\( x \\in X_{key} \\) into mBART and obtain \\( h_x^t \\), which is the model's output before the final language model head. Then, \\( h_x^t \\) is fed into the final language model head and a multilayer perceptron (MLP) to obtain a generation probability \\( P_{\\theta}(y_t|x,y_{< t}) \\) and a dynamic weight \\( P_{\\theta}(x|X_{key},y_{< t}) \\), respectively. Here, the dynamic weight represents the probability of selecting \\( x \\) from \\( X_{key} \\) for summary generation. Therefore, the generation probability of the final summary \\( y \\) is calculated by marginalizing (MARG) as follows:"},
{"type": "equation", "bbox": [0.127, 0.738, 0.487, 0.819], "angle": 0, "content": "\\[\nP_{\\theta}(y \\mid x, X_{key}) = \\prod_{t=1}^{T} \\sum_{x \\in X_{key}} P_{\\theta}\\left(y_{t} \\mid x, y_{<t}\\right) P_{\\theta}\\left(x \\mid X_{key}, y_{<t}\\right) \\tag{4}\n\\]"},
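The marginalization in Eq. (4) mixes per-snippet token distributions with the dynamic weights at every decoding step. A toy numeric sketch of just that arithmetic, with random tensors standing in for mBART outputs:

```python
# Marginalize over key snippets as in Eq. (4).
import torch

T, n_key, vocab = 5, 3, 100
tok_probs = torch.softmax(torch.randn(n_key, T, vocab), -1)  # P(y_t | x, y_<t)
weights = torch.softmax(torch.randn(T, n_key), -1)           # P(x | X_key, y_<t)
y = torch.randint(vocab, (T,))                               # reference tokens

p_y = torch.tensor(1.0)
for t in range(T):
    # sum over snippets x at step t, product over steps t
    p_y = p_y * (weights[t] * tok_probs[:, t, y[t]]).sum()
print(float(p_y))
```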
{"type": "title", "bbox": [0.114, 0.829, 0.198, 0.843], "angle": 0, "content": "4.4 Loss"},
{"type": "text", "bbox": [0.113, 0.849, 0.49, 0.896], "angle": 0, "content": "Extractor Loss: For the extractor, we use cross-entropy to measure the loss of key sentence extraction:"},
{"type": "equation", "bbox": [0.135, 0.905, 0.488, 0.922], "angle": 0, "content": "\\[\n\\mathcal{L}_{ext} = -(z \\log(\\hat{z}) + (1 - z) \\log(1 - \\hat{z})) \\tag{5}\n\\]"},
{"type": "text", "bbox": [0.508, 0.085, 0.885, 0.229], "angle": 0, "content": "where \\(\\hat{z}\\) is the predicted result, which can be computed using the indices from Eq.(3), and \\(z\\) is the true label. The calculation process is as follows: First, we use mBERT to represent all the sentences in the source document clusters and the target summary. Then, we calculate the cosine similarity between these sentences and the target summary, and label the top-\\(K\\) sentences with the highest similarity as key sentences."},
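The labeling step can be sketched directly: embed every source sentence and the target summary, rank by cosine similarity, and mark the top-\( K \) as positive labels. sentence-transformers is used below as a stand-in for the paper's mBERT sentence representations, which is an assumption:

```python
# Build extractor labels z by cosine similarity to the target summary.
import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
sentences = ["Der Gipfel begann in Berlin.",
             "Sports results came in.",
             "Le sommet s'est ouvert à Berlin."]
summary = "The summit opened in Berlin."

emb = torch.tensor(model.encode(sentences + [summary]))
sims = torch.nn.functional.cosine_similarity(
    emb[:-1], emb[-1:].expand_as(emb[:-1]))
K = 2
z = torch.zeros(len(sentences))
z[sims.topk(K).indices] = 1.0   # top-K most similar sentences are positives
print(z.tolist())
```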
{"type": "text", "bbox": [0.508, 0.23, 0.884, 0.275], "angle": 0, "content": "Generator Loss: For the generator, we use the negative log-likelihood loss (NLL) to measure the loss:"},
{"type": "equation", "bbox": [0.591, 0.278, 0.884, 0.296], "angle": 0, "content": "\\[\n\\mathcal{L}_{gen} = -\\log P_{\\theta}(y \\mid x, X_{key}) \\tag{6}\n\\]"},
{"type": "text", "bbox": [0.508, 0.301, 0.885, 0.444], "angle": 0, "content": "Consistency Loss: The dynamic weight \\( P_{\\theta}(x|X_{key},y_{< t}) \\) of the generator represents the probability of selecting \\( x \\) from \\( X_{key} \\) at the \\( t \\)-th time step, essentially serving the same function as the extractor. Therefore, we adopt the KL divergence-based consistency loss proposed by Mao et al. (2021) to quantify the difference between the average dynamic weight and the extractor's predicted scores:"},
{"type": "equation", "bbox": [0.554, 0.453, 0.883, 0.496], "angle": 0, "content": "\\[\n\\mathcal{L}_{con} = \\mathrm{KL}\\left(\\frac{1}{T} \\sum_{t=1}^{T} P_{\\theta}\\left(x \\mid X_{key}, y_{<t}\\right),\\right. \\tag{7}\n\\]"},
{"type": "equation", "bbox": [0.707, 0.499, 0.838, 0.516], "angle": 0, "content": "\\[\n\\left. \\operatorname{Softmax}(score)\\right)\n\\]"},
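Eq. (7) compares the time-averaged dynamic weights with the softmaxed extractor scores under KL divergence. A sketch with the KL term written out explicitly, random tensors standing in for real model outputs:

```python
# Consistency loss of Eq. (7): KL(mean dynamic weights || Softmax(score)).
import torch

T, K = 5, 3
dyn = torch.softmax(torch.randn(T, K), -1)  # P(x | X_key, y_<t), one row per t
score = torch.randn(K)                      # extractor scores of the top-K

p = dyn.mean(0)                             # (1/T) * sum_t P(x | X_key, y_<t)
q = torch.softmax(score, -1)                # Softmax(score)
l_con = (p * (p.log() - q.log())).sum()     # KL(p || q)
print(float(l_con))
```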
{"type": "text", "bbox": [0.508, 0.526, 0.884, 0.556], "angle": 0, "content": "Total Loss: The overall model loss can be defined as follows:"},
{"type": "equation", "bbox": [0.526, 0.569, 0.884, 0.587], "angle": 0, "content": "\\[\n\\mathcal{L}_{total} = \\lambda_{ext} \\mathcal{L}_{ext} + \\lambda_{gen} \\mathcal{L}_{gen} + \\lambda_{con} \\mathcal{L}_{con} \\tag{8}\n\\]"},
{"type": "text", "bbox": [0.508, 0.598, 0.869, 0.614], "angle": 0, "content": "where \\(\\lambda_{ext}\\), \\(\\lambda_{gen}\\), and \\(\\lambda_{con}\\) are hyperparameters."},
{"type": "title", "bbox": [0.509, 0.624, 0.656, 0.641], "angle": 0, "content": "5 Experiments"},
{"type": "text", "bbox": [0.508, 0.65, 0.882, 0.681], "angle": 0, "content": "In this section, we introduce the baselines we used and present the implementation details."},
{"type": "title", "bbox": [0.509, 0.692, 0.63, 0.706], "angle": 0, "content": "5.1 Baselines"},
{"type": "text", "bbox": [0.508, 0.712, 0.885, 0.808], "angle": 0, "content": "To benchmark the MLMD-news dataset, in addition to our proposed graph-based extractive-generative method, we also used the following baselines, which can be categorized into Extract-then-translate, Translate-then-extract, Translate-then-MDS, Abstractive models, LLM, and Extract-then-abstract."},
{"type": "text", "bbox": [0.508, 0.81, 0.885, 0.922], "angle": 0, "content": "Extract-then-translate: First, summaries are extracted from the source document cluster using classic extractive models such as Centroid (Radev et al., 2004), LexRank (Erkan and Radev, 2004), MMR (Carbonell and Goldstein, 1998), and TextRank (Mihalcea and Tarau, 2004), and then translated into the target language."},
{"type": "page_number", "bbox": [0.481, 0.928, 0.522, 0.941], "angle": 0, "content": "9260"}
],
[
{"type": "text", "bbox": [0.114, 0.085, 0.488, 0.164], "angle": 0, "content": "Translate-then-extract: First, the documents in the source document cluster are translated into the target language, and then summaries are extracted using classic extractive models such as Centroid, LexRank, MMR, and TextRank."},
{"type": "text", "bbox": [0.114, 0.166, 0.488, 0.245], "angle": 0, "content": "Translate-then-MDS: First, the documents in the source document cluster are translated into the target language, and then summaries are generated using SLMD models such as PRIMERA (Xiao et al., 2022) and PEGASUS (Zhang et al., 2019)."},
{"type": "text", "bbox": [0.114, 0.247, 0.488, 0.342], "angle": 0, "content": "Abstractive models: Use mT5 (Xue, 2020) and mBART (Tang et al., 2020), which have multi-language understanding and generation capabilities, to directly generate summaries from the source document cluster. If the input exceeds the model's capacity, the excess parts are truncated."},
{"type": "text", "bbox": [0.114, 0.343, 0.488, 0.438], "angle": 0, "content": "LLM: Use models such as GPT-3.5-turbo-16k³, GPT-4.0-turbo-32k³, Llama-3.1-8B-Instruct (Dubey et al., 2024), Gemini-1.5-pro⁴, and Claude-2.1⁵, which have multi-language and long-input capabilities, to directly generate summaries from the source document cluster."},
{"type": "text", "bbox": [0.114, 0.44, 0.488, 0.519], "angle": 0, "content": "Extract-then-abstract: First, use classic extractive models such as Centroid and TextRank to extract summaries from the source document cluster, and then generate the target summary using generative models like mT5 and mBART."},
{"type": "title", "bbox": [0.114, 0.533, 0.345, 0.547], "angle": 0, "content": "5.2 Implementation Details"},
{"type": "text", "bbox": [0.113, 0.553, 0.488, 0.777], "angle": 0, "content": "In constructing MLMD-news, the ROUGE-1 thresholds for French, German, and Spanish were set to 88.03, 87.05, and 89.25, respectively, based on the average ROUGE-1 scores for news in each language. For the Graph-based Extract-Generate model, we set \\(\\lambda_{\\mathrm{ext}} = 1\\), \\(\\lambda_{\\mathrm{gen}} = 0.1\\), and \\(\\lambda_{\\mathrm{con}} = 0.0001\\). The extractor's learning rate was set to \\(5 \\times 10^{-6}\\), while the generator's learning rate was \\(5 \\times 10^{-5}\\). The batch size was 8, and top-\\(K\\) was set to 10. ROUGE scores are calculated with pyrouge<sup>6</sup>. All experiments were conducted on NVIDIA L20 GPUs. In addition, the total number of parameters in the Graph-based Extract-Generate model is about 800M."},
{"type": "title", "bbox": [0.114, 0.79, 0.357, 0.805], "angle": 0, "content": "5.3 Benchmark Experiments"},
{"type": "text", "bbox": [0.114, 0.811, 0.488, 0.859], "angle": 0, "content": "In Table 2, we present the ROUGE scores for different methods on the MLMD-news dataset, and the following observations can be made:"},
{"type": "table", "bbox": [0.518, 0.082, 0.876, 0.501], "angle": 0, "content": "<table><tr><td></td><td>R-1</td><td>R-2</td><td>R-L</td></tr><tr><td colspan=\"4\">Extract-then-translate</td></tr><tr><td>Centroid</td><td>27.90</td><td>6.92</td><td>23.35</td></tr><tr><td>LexRank</td><td>28.61</td><td>7.30</td><td>24.27</td></tr><tr><td>MMR</td><td>24.07</td><td>5.61</td><td>20.23</td></tr><tr><td>TextRank</td><td>28.66</td><td>7.28</td><td>24.13</td></tr><tr><td colspan=\"4\">Translate-then-extract</td></tr><tr><td>Centroid</td><td>29.16</td><td>7.64</td><td>23.60</td></tr><tr><td>LexRank</td><td>31.12</td><td>8.53</td><td>25.70</td></tr><tr><td>MMR</td><td>25.58</td><td>6.11</td><td>20.93</td></tr><tr><td>TextRank</td><td>30.18</td><td>8.04</td><td>24.55</td></tr><tr><td colspan=\"4\">Translate-then-MDS</td></tr><tr><td>PEGASUS</td><td>35.54</td><td>9.44</td><td>29.95</td></tr><tr><td>PRIMERA</td><td>36.83</td><td>10.79</td><td>32.12</td></tr><tr><td colspan=\"4\">Abstractive models</td></tr><tr><td>mBART(1024)</td><td>36.84</td><td>8.13</td><td>32.22</td></tr><tr><td>mT5(1024)</td><td>33.21</td><td>6.26</td><td>27.43</td></tr><tr><td colspan=\"4\">LLM</td></tr><tr><td>GPT-3.5-turbo-16k</td><td>34.36</td><td>8.88</td><td>30.74</td></tr><tr><td>GPT-4.0-turbo-32k</td><td>39.02</td><td>10.45</td><td>34.68</td></tr><tr><td>Llama-3.1-8B-Instruct</td><td>36.61</td><td>10.44</td><td>33.31</td></tr><tr><td>Gemini-1.5-pro</td><td>40.79</td><td>12.05</td><td>36.59</td></tr><tr><td>Claude-2.1</td><td>40.51</td><td>11.67</td><td>36.16</td></tr><tr><td colspan=\"4\">Extract-then-abstract</td></tr><tr><td>TextRank-then-mBART</td><td>32.00</td><td>5.84</td><td>28.00</td></tr><tr><td>Centroid-then-mBART</td><td>32.76</td><td>5.70</td><td>28.71</td></tr><tr><td>TextRank-then-mT5</td><td>31.63</td><td>5.22</td><td>26.46</td></tr><tr><td>Centroid-then-mT5</td><td>31.39</td><td>5.21</td><td>26.25</td></tr><tr><td>Our</td><td>39.16</td><td>9.64</td><td>34.02</td></tr></table>"},
{"type": "table_caption", "bbox": [0.509, 0.51, 0.882, 0.538], "angle": 0, "content": "Table 2: The benchmark experimental results on the MLMD-news dataset."},
{"type": "text", "bbox": [0.532, 0.565, 0.882, 0.644], "angle": 0, "content": "- The ROUGE scores for the Extract-then-translate methods are quite low, which can be attributed to the limited support of classic extractive methods for mixed-language input and to the quality loss introduced when translating the extracted sentences."},
{"type": "text", "bbox": [0.532, 0.657, 0.882, 0.737], "angle": 0, "content": "- The ROUGE scores for the Translate-then-extract methods are higher than those for the Extract-then-translate methods, possibly because these classic extractive methods perform better with single-language input."},
{"type": "text", "bbox": [0.532, 0.749, 0.882, 0.859], "angle": 0, "content": "- The ROUGE scores of the Translate-then-MDS methods are significantly higher than those of the Translate-then-extract methods. This result is expected because the MDS methods have been tailored to multi-document processing, while the traditional extraction methods have not."},
{"type": "text", "bbox": [0.532, 0.874, 0.882, 0.919], "angle": 0, "content": "- Abstractive models show a significant advantage in ROUGE scores compared to the Extract-then-translate and Translate-then"},
{"type": "list", "bbox": [0.532, 0.565, 0.882, 0.919], "angle": 0, "content": null},
{"type": "page_footnote", "bbox": [0.137, 0.868, 0.293, 0.881], "angle": 0, "content": "<sup>3</sup>https://openai.com/"},
{"type": "page_footnote", "bbox": [0.137, 0.882, 0.342, 0.895], "angle": 0, "content": "<sup>4</sup>https://gemini.google.com/"},
{"type": "page_footnote", "bbox": [0.137, 0.895, 0.284, 0.907], "angle": 0, "content": "<sup>5</sup>https://claude.ai/"},
{"type": "page_footnote", "bbox": [0.137, 0.907, 0.411, 0.921], "angle": 0, "content": "<sup>6</sup>https://github.com/andersjo/pyrouge"},
{"type": "list", "bbox": [0.137, 0.868, 0.411, 0.921], "angle": 0, "content": null},
{"type": "page_number", "bbox": [0.481, 0.929, 0.518, 0.941], "angle": 0, "content": "9261"}
],
[
{"type": "image", "bbox": [0.197, 0.083, 0.805, 0.187], "angle": 0, "content": null},
{"type": "image_caption", "bbox": [0.283, 0.196, 0.713, 0.21], "angle": 0, "content": "Figure 6: Parameter sensitivity of top-\\( K \\) on the ROUGE score."},
{"type": "text", "bbox": [0.15, 0.236, 0.49, 0.363], "angle": 0, "content": "extract methods, possibly because they possess strong multi-language understanding capabilities. Additionally, the performance of Abstractive models is similar to Translate-then-MDS, indicating that multi-document processing capability and multilingual understanding are equally important in MLMD."},
{"type": "text", "bbox": [0.137, 0.375, 0.489, 0.454], "angle": 0, "content": "- The best and second-best results are achieved by LLMs, mainly due to their strong multi-language understanding and generation capabilities, as well as their ability to accept very long input documents."},
{"type": "text", "bbox": [0.137, 0.465, 0.49, 0.592], "angle": 0, "content": "- Aside from our proposed method, other Extract-then-abstract methods have lower ROUGE scores compared to Abstractive models. This suggests that inappropriate extraction may not only fail to enhance summarization performance but could also lead to poorer final results due to the loss of important information."},
{"type": "text", "bbox": [0.137, 0.603, 0.489, 0.682], "angle": 0, "content": "- The results indicate that our method addresses the above issues of other Extract-then-abstract methods and achieves performance close to that of LLMs, demonstrating the effectiveness of our method."},
{"type": "list", "bbox": [0.137, 0.375, 0.49, 0.682], "angle": 0, "content": null},
{"type": "title", "bbox": [0.114, 0.692, 0.281, 0.707], "angle": 0, "content": "5.4 Ablation Study"},
{"type": "text", "bbox": [0.113, 0.713, 0.49, 0.921], "angle": 0, "content": "In Table 3, we present the impact of the different modules of our model on the MLMD-news dataset, including the extractor module, the generator module, and the consistency loss. When the extractor module is removed (i.e., w/o extractor), our method degenerates to mBART, resulting in decreases of 2.32 points, 1.51 points, and 1.8 points in ROUGE-1, ROUGE-2, and ROUGE-L, respectively. This indicates that extracting key sentences significantly impacts the overall quality of the summary. When the generator module is removed (i.e., w/o generator), the extracted sentences are in multiple languages, and machine translation is used to convert them into"},
{"type": "table", "bbox": [0.548, 0.232, 0.848, 0.31], "angle": 0, "content": "<table><tr><td></td><td>R-1</td><td>R-2</td><td>R-L</td></tr><tr><td>Our</td><td>39.16</td><td>9.64</td><td>34.02</td></tr><tr><td>w/o extractor</td><td>36.84</td><td>8.13</td><td>32.22</td></tr><tr><td>w/o generator</td><td>34.17</td><td>8.45</td><td>31.04</td></tr><tr><td>w/o consistency</td><td>38.81</td><td>9.41</td><td>33.85</td></tr></table>"},
{"type": "table_caption", "bbox": [0.611, 0.318, 0.78, 0.333], "angle": 0, "content": "Table 3: Ablation Study."},
{"type": "text", "bbox": [0.508, 0.36, 0.886, 0.522], "angle": 0, "content": "a summary, resulting in decreases of 4.99 points, 1.19 points, and 2.98 points in ROUGE-1, ROUGE-2, and ROUGE-L, respectively. This indicates that the generator plays a crucial role in the overall quality of the summary. Finally, when the consistency loss module is removed (i.e., w/o consistency), ROUGE-1, ROUGE-2, and ROUGE-L decrease by 0.35 points, 0.23 points, and 0.17 points, respectively. This suggests that the consistency loss helps optimize the extraction quality of the extractor."},
{"type": "title", "bbox": [0.509, 0.535, 0.724, 0.551], "angle": 0, "content": "5.5 Parameter Sensitivity"},
{"type": "text", "bbox": [0.508, 0.557, 0.886, 0.703], "angle": 0, "content": "We also explored the impact of extracting different numbers of key sentences (i.e., different values of \\( K \\) in top-\\( K \\)) on model performance on the MLMD-news dataset. As shown in Figure 6, the ROUGE score increases as \\( K \\) increases, reaching a peak at \\( K = 10 \\). However, when \\( K \\) exceeds 10, the ROUGE score begins to decline, possibly due to the noise introduced by including too many sentences."},
{"type": "title", "bbox": [0.509, 0.717, 0.642, 0.733], "angle": 0, "content": "6 Conclusion"},
{"type": "text", "bbox": [0.508, 0.745, 0.886, 0.921], "angle": 0, "content": "In this paper, we constructed the first mixed-language multi-document news summarization dataset (MLMD-news) and proposed a graph-based extract-generate model specifically designed for the MLMD news summarization task. We conducted benchmark tests on the MLMD-news dataset, evaluating our proposed method along with advanced methods such as LLMs. Additionally, we have publicly released the dataset and code, hoping to foster further development in the MLMD news summarization area."},
{"type": "page_number", "bbox": [0.481, 0.929, 0.522, 0.941], "angle": 0, "content": "9262"}
],
[
{"type": "title", "bbox": [0.115, 0.085, 0.221, 0.1], "angle": 0, "content": "Limitations"},
{"type": "text", "bbox": [0.113, 0.11, 0.493, 0.352], "angle": 0, "content": "Although our method demonstrates significant performance advantages in the mixed-language multi-document summarization task, GPU performance limitations forced us to cap the maximum number of extracted sentences (top-\\( K \\)) at 13 in our experiments. Increasing the top-\\( K \\) value further might improve the model's peak performance, but this hypothesis has yet to be validated. Moreover, the mixed-language dataset we constructed currently covers primarily high-resource languages such as German and English, with a limited number of languages involved. Future work could extend to more languages, especially low-resource ones, to further verify the method's applicability and generalization capability."},
{"type": "title", "bbox": [0.115, 0.364, 0.317, 0.379], "angle": 0, "content": "Ethical Considerations"},
{"type": "text", "bbox": [0.113, 0.389, 0.49, 0.566], "angle": 0, "content": "Our MLMD-news dataset is built on the publicly available multi-document summarization dataset Multi-News, through translation and filtering processes. During the construction of the dataset, we strictly adhered to academic ethical guidelines, respected data privacy and related rights, and ensured that the use of the data complied with ethical standards. At the same time, we implemented rigorous procedures and standards to guarantee the transparency and reliability of data processing, thus supporting credible research outcomes."},
{"type": "title", "bbox": [0.115, 0.593, 0.214, 0.608], "angle": 0, "content": "References"},
{"type": "ref_text", "bbox": [0.116, 0.615, 0.49, 0.682], "angle": 0, "content": "Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774."},
{"type": "ref_text", "bbox": [0.117, 0.692, 0.489, 0.731], "angle": 0, "content": "Akiko Aizawa. 2003. An information-theoretic perspective of tf-idf measures. Information Processing & Management, 39(1):45-65."},
{"type": "ref_text", "bbox": [0.117, 0.742, 0.488, 0.794], "angle": 0, "content": "Florian Boudin, Stéphane Huet, and Juan-Manuel Torres-Moreno. 2011. A graph-based approach to cross-language multi-document summarization. Polibits, (43):113-118."},
{"type": "ref_text", "bbox": [0.117, 0.805, 0.489, 0.87], "angle": 0, "content": "Yuang Cai and Yuyu Yuan. 2024. CAR-transformer: Cross-attention reinforcement transformer for cross-lingual summarization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17718-17726."},
{"type": "ref_text", "bbox": [0.117, 0.881, 0.489, 0.921], "angle": 0, "content": "Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings"},
{"type": "list", "bbox": [0.116, 0.615, 0.49, 0.921], "angle": 0, "content": null},
{"type": "ref_text", "bbox": [0.529, 0.086, 0.884, 0.126], "angle": 0, "content": "of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 335-336."},
{"type": "ref_text", "bbox": [0.512, 0.135, 0.885, 0.201], "angle": 0, "content": "Asli Celikyilmaz and Dilek Hakkani-Tur. 2010. A hybrid hierarchical model for multi-document summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 815-824."},
{"type": "ref_text", "bbox": [0.512, 0.209, 0.885, 0.275], "angle": 0, "content": "Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783."},
{"type": "ref_text", "bbox": [0.512, 0.284, 0.884, 0.336], "angle": 0, "content": "Güneş Erkan and Dragomir R Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479."},
{"type": "ref_text", "bbox": [0.512, 0.345, 0.883, 0.411], "angle": 0, "content": "Alexander R Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R Radev. 2019. Multi-News: A large-scale multi-document summarization dataset and abstractive hierarchical model. arXiv preprint arXiv:1906.01749."},
{"type": "ref_text", "bbox": [0.512, 0.42, 0.883, 0.46], "angle": 0, "content": "Luciano Floridi and Massimo Chiriatti. 2020. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30:681-694."},
{"type": "ref_text", "bbox": [0.512, 0.469, 0.884, 0.547], "angle": 0, "content": "Giacomo Frisoni, Paolo Italiani, Stefano Salvatori, and Gianluca Moro. 2023. Cogito ergo summ: Abstractive summarization of biomedical papers via semantic parsing graphs and consistency rewards. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 12781-12789."},
{"type": "ref_text", "bbox": [0.512, 0.556, 0.884, 0.609], "angle": 0, "content": "Alireza Ghadimi and Hamid Beigy. 2022. Hybrid multi-document summarization using pre-trained language models. Expert Systems with Applications, 192:116292."},
{"type": "ref_text", "bbox": [0.512, 0.618, 0.884, 0.684], "angle": 0, "content": "George Giannakopoulos. 2013. Multi-document multilingual summarization and evaluation tracks in ACL 2013 MultiLing workshop. In Proceedings of the MultiLing 2013 Workshop on Multilingual Multi-document Summarization, pages 20-28."},
{"type": "ref_text", "bbox": [0.512, 0.692, 0.883, 0.772], "angle": 0, "content": "Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 362-370."},
{"type": "ref_text", "bbox": [0.512, 0.78, 0.883, 0.846], "angle": 0, "content": "Hanqi Jin and Xiaojun Wan. 2020. Abstractive multi-document summarization via joint learning with single-document summarization. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2545-2554."},
{"type": "ref_text", "bbox": [0.512, 0.855, 0.883, 0.921], "angle": 0, "content": "Akanksha Joshi, Eduardo Fidalgo, Enrique Alegre, and Laura Fernández-Robles. 2023. DeepSumm: Exploiting topic models and sequence to sequence networks for extractive text summarization. Expert Systems with Applications, 211:118442."},
{"type": "list", "bbox": [0.512, 0.086, 0.885, 0.921], "angle": 0, "content": null},
{"type": "page_number", "bbox": [0.481, 0.929, 0.521, 0.941], "angle": 0, "content": "9263"}
],
[
{"type": "ref_text", "bbox": [0.117, 0.086, 0.49, 0.14], "angle": 0, "content": "Thang Le. 2024. Cross-lingual summarization with pseudo-label regularization. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4644-4677."},
{"type": "ref_text", "bbox": [0.117, 0.153, 0.49, 0.271], "angle": 0, "content": "Elvys Linhares Pontes, Stéphane Huet, Juan-Manuel Torres-Moreno, and Andréa Carneiro Linhares. 2018. Cross-language text summarization using sentence and multi-sentence compression. In Natural Language Processing and Information Systems: 23rd International Conference on Applications of Natural Language to Information Systems, NLDB 2018, Paris, France, June 13-15, 2018, Proceedings 23, pages 467-479. Springer."},
{"type": "ref_text", "bbox": [0.117, 0.284, 0.49, 0.35], "angle": 0, "content": "Marina Litvak and Mark Last. 2008. Graph-based keyword extraction for single-document summarization. In Coling 2008: Proceedings of the Workshop on Multi-source Multilingual Information Extraction and Summarization, pages 17-24."},
{"type": "ref_text", "bbox": [0.117, 0.363, 0.488, 0.442], "angle": 0, "content": "Shuaiqi Liu, Jiannong Cao, Ruosong Yang, and Zhiyuan Wen. 2021. Highlight-transformer: Leveraging key phrase aware attention to improve abstractive multi-document summarization. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5021-5027."},
{"type": "ref_text", "bbox": [0.117, 0.456, 0.488, 0.495], "angle": 0, "content": "Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345."},
{"type": "ref_text", "bbox": [0.117, 0.51, 0.488, 0.587], "angle": 0, "content": "Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chenguang Zhu, Ahmed H Awadallah, and Dragomir Radev. 2021. DYLE: Dynamic latent extraction for abstractive long-input summarization. arXiv preprint arXiv:2110.08168."},
{"type": "ref_text", "bbox": [0.117, 0.602, 0.488, 0.668], "angle": 0, "content": "Laura Mascarell, Rabin Chalumattu, and Julien Heitmann. 2023. Entropy-based sampling for abstractive multi-document summarization in low-resource settings. In 16th International Natural Language Generation Conference (INLG 2023)."},
{"type": "ref_text", "bbox": [0.117, 0.682, 0.488, 0.733], "angle": 0, "content": "Jian-Ping Mei and Lihui Chen. 2012. SumCR: A new subtopic-based extractive approach for text summarization. Knowledge and Information Systems, 31:527-545."},
{"type": "ref_text", "bbox": [0.117, 0.748, 0.488, 0.801], "angle": 0, "content": "Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411."},
{"type": "ref_text", "bbox": [0.117, 0.814, 0.488, 0.855], "angle": 0, "content": "Paul Over and James Yen. 2004. An introduction to DUC-2004. National Institute of Standards and Technology."},
{"type": "ref_text", "bbox": [0.117, 0.868, 0.488, 0.92], "angle": 0, "content": "Dragomir R Radev, Hongyan Jing, Malgorzata Stys, and Daniel Tam. 2004. Centroid-based summarization of multiple documents. Information Processing & Management, 40(6):919-938."},
{"type": "list", "bbox": [0.117, 0.086, 0.49, 0.92], "angle": 0, "content": null},
{"type": "ref_text", "bbox": [0.513, 0.086, 0.883, 0.139], "angle": 0, "content": "Yun-Zhu Song, Yi-Syuan Chen, and Hong-Han Shuai. 2022. Improving multi-document summarization through referenced flexible extraction with credit-awareness. arXiv preprint arXiv:2205.01889."},
{"type": "ref_text", "bbox": [0.512, 0.149, 0.883, 0.24], "angle": 0, "content": "Krysta Svore, Lucy Vanderwende, and Christopher Burges. 2007. Enhancing single-document summarization by combining RankNet and third-party sources. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 448-457."},
{"type": "ref_text", "bbox": [0.512, 0.25, 0.883, 0.316], "angle": 0, "content": "Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401."},
{"type": "ref_text", "bbox": [0.512, 0.326, 0.883, 0.377], "angle": 0, "content": "Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903."},
{"type": "ref_text", "bbox": [0.512, 0.388, 0.883, 0.455], "angle": 0, "content": "Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1546-1555."},
{"type": "ref_text", "bbox": [0.512, 0.464, 0.883, 0.516], "angle": 0, "content": "Xiaojun Wan, Ziqiang Cao, Furu Wei, Sujian Li, and Ming Zhou. 2015. Multi-document summarization via discriminative summary reranking. arXiv preprint arXiv:1507.02062."},
{"type": "ref_text", "bbox": [0.512, 0.526, 0.883, 0.593], "angle": 0, "content": "Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 917-926."},
{"type": "ref_text", "bbox": [0.512, 0.602, 0.883, 0.667], "angle": 0, "content": "Xiaojun Wan, Fuli Luo, Xue Sun, Songfang Huang, and Jin-ge Yao. 2019. Cross-language document summarization via extraction and ranking of multiple summaries. Knowledge and Information Systems, 58:481-499."},
{"type": "ref_text", "bbox": [0.512, 0.678, 0.883, 0.73], "angle": 0, "content": "Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous graph neural networks for extractive document summarization. arXiv preprint arXiv:2004.12393."},
{"type": "ref_text", "bbox": [0.512, 0.74, 0.883, 0.805], "angle": 0, "content": "Dingding Wang, Shenghuo Zhu, Tao Li, and Yihong Gong. 2009. Multi-document summarization using sentence-based topic models. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 297-300."},
{"type": "ref_text", "bbox": [0.512, 0.816, 0.883, 0.92], "angle": 0, "content": "Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. 2022. PRIMERA: Pyramid-based masked sentence pre-training for multi-document summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5245-5263. Association for Computational Linguistics."},
{"type": "list", "bbox": [0.512, 0.086, 0.883, 0.92], "angle": 0, "content": null},
{"type": "page_number", "bbox": [0.481, 0.929, 0.521, 0.941], "angle": 0, "content": "9264"}
],
[
|
| 2146 |
+
{
|
| 2147 |
+
"type": "ref_text",
|
| 2148 |
+
"bbox": [
|
| 2149 |
+
0.115,
|
| 2150 |
+
0.086,
|
| 2151 |
+
0.492,
|
| 2152 |
+
0.128
|
| 2153 |
+
],
|
| 2154 |
+
"angle": 0,
|
| 2155 |
+
"content": "L Xue. 2020. mt5: A massively multilingual pretrained text-to-text transformer. arXiv preprint arXiv:2010.11934."
|
| 2156 |
+
},
|
| 2157 |
+
{
|
| 2158 |
+
"type": "ref_text",
|
| 2159 |
+
"bbox": [
|
| 2160 |
+
0.117,
|
| 2161 |
+
0.136,
|
| 2162 |
+
0.49,
|
| 2163 |
+
0.19
|
| 2164 |
+
],
|
| 2165 |
+
"angle": 0,
|
| 2166 |
+
"content": "Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Parek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. arXiv preprint arXiv:1706.06681."
|
| 2167 |
+
},
|
| 2168 |
+
{
|
| 2169 |
+
"type": "ref_text",
|
| 2170 |
+
"bbox": [
|
| 2171 |
+
0.117,
|
| 2172 |
+
0.199,
|
| 2173 |
+
0.49,
|
| 2174 |
+
0.252
|
| 2175 |
+
],
|
| 2176 |
+
"angle": 0,
|
| 2177 |
+
"content": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019. Pegasus: Pre-training with extracted gap-sentences for abstractive summarization. Preprint, arXiv:1912.08777."
|
| 2178 |
+
},
|
| 2179 |
+
{
|
| 2180 |
+
"type": "ref_text",
|
| 2181 |
+
"bbox": [
|
| 2182 |
+
0.117,
|
| 2183 |
+
0.261,
|
| 2184 |
+
0.49,
|
| 2185 |
+
0.38
|
| 2186 |
+
],
|
| 2187 |
+
"angle": 0,
|
| 2188 |
+
"content": "Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. NCLS: Neural cross-lingual summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3054–3064, Hong Kong, China. Association for Computational Linguistics."
|
| 2189 |
+
},
|
| 2190 |
+
{
|
| 2191 |
+
"type": "ref_text",
|
| 2192 |
+
"bbox": [
|
| 2193 |
+
0.117,
|
| 2194 |
+
0.389,
|
| 2195 |
+
0.49,
|
| 2196 |
+
0.456
|
| 2197 |
+
],
|
| 2198 |
+
"angle": 0,
|
| 2199 |
+
"content": "Markus Zopf. 2018. Auto-hmds: Automatic construction of a large heterogeneous multilingual multi-document summarization corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)."
|
| 2200 |
+
},
|
| 2201 |
+
{
|
| 2202 |
+
"type": "list",
|
| 2203 |
+
"bbox": [
|
| 2204 |
+
0.115,
|
| 2205 |
+
0.086,
|
| 2206 |
+
0.492,
|
| 2207 |
+
0.456
|
| 2208 |
+
],
|
| 2209 |
+
"angle": 0,
|
| 2210 |
+
"content": null
|
| 2211 |
+
},
|
| 2212 |
+
{
|
| 2213 |
+
"type": "page_number",
|
| 2214 |
+
"bbox": [
|
| 2215 |
+
0.481,
|
| 2216 |
+
0.929,
|
| 2217 |
+
0.521,
|
| 2218 |
+
0.941
|
| 2219 |
+
],
|
| 2220 |
+
"angle": 0,
|
| 2221 |
+
"content": "9265"
|
| 2222 |
+
}
|
| 2223 |
+
]
|
| 2224 |
+
]
|
2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/a6e57a98-a206-4f15-a537-4cd8fddecece_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cdc3c6aa0fde62b3553b90b63f445d40bbbc6662363b003f34605cdca1a23dad
size 2216935

2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/full.md ADDED
@@ -0,0 +1,317 @@

# A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model

Shengxiang Gao$^{1,2}$, Fang Nan$^{1,2}$, Yongbing Zhang$^{1,2}$, Yuxin Huang$^{1,2}$, Kaiwen Tan$^{1,2*}$, Zhengtao Yu$^{1,2}$

$^{1}$Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China; $^{2}$Yunnan Key Laboratory of Artificial Intelligence, Kunming, China. {kwtan}@kust.edu.cn

# Abstract

Existing research on news summarization primarily focuses on single-language single-document (SLSD), single-language multi-document (SLMD) or cross-language single-document (CLSD) settings. However, in real-world scenarios, news about an international event often involves multiple documents in different languages, i.e., mixed-language multi-document (MLMD) input, so summarizing MLMD news is of great significance. The lack of datasets for MLMD news summarization, however, has constrained the development of research in this area. To fill this gap, we construct a mixed-language multi-document news summarization dataset (MLMD-news), which covers four different languages and contains 10,992 pairs of source document clusters and target summaries. Additionally, we propose a graph-based extract-generate model, benchmark various methods on the MLMD-news dataset, and publicly release our dataset and code<sup>1</sup>, aiming to advance research in summarization within MLMD scenarios.

# 1 Introduction

The news summarization task aims to simplify and condense a large volume of news content through automated methods, extracting key information and main viewpoints so that readers can quickly grasp the core content of the news. Existing research on news summarization primarily focuses on single-language single-document (SLSD) (Svore et al., 2007; Litvak and Last, 2008; Liu and Lapata, 2019), single-language multi-document (SLMD) (Haghighi and Vanderwende, 2009; Yasunaga et al., 2017; Wang et al., 2009) and cross-language single-document (CLSD) (Wan et al., 2010; Wan, 2011; Wan et al., 2019) settings. However, in reality, many news articles, especially international news, appear in the form of mixed-language multi-document (MLMD) clusters. Figure 1 illustrates the four tasks: SLSD, SLMD, CLSD, and MLMD.

Figure 1: The diagram of SLSD, SLMD, CLSD and MLMD. Each rounded rectangle represents a source document, while the pointed rectangle represents the target summary. "En", "De", "Fr" and "Es" indicate that the text is in English, German, French, and Spanish, respectively.

It is noteworthy that, with the advancement of multi-language models such as mBART (Tang et al., 2020) and GPT (Floridi and Chiriatti, 2020; Achiam et al., 2023), a task referred to as multi-language multi-document news summarization has recently emerged (Giannakopoulos, 2013; Zopf, 2018; Mascarell et al., 2023). In that task, although the languages of different source document clusters vary, each individual source document cluster consists of multiple documents in the same language. Therefore, each instance of summary generation in this type of task essentially falls under the SLMD category. In contrast, in MLMD, each individual source document cluster is composed of multiple documents in different languages. From this perspective, MLMD is more challenging than multi-language multi-document summarization: the latter requires the model to understand multiple documents in the current language during a single summary generation, whereas MLMD requires the model to simultaneously possess the ability to understand multiple languages and multiple documents within a single summary generation.

However, the lack of MLMD news datasets has hindered progress in this field. Therefore, we first construct a MLMD-news dataset. This dataset includes documents in four languages: English, German, French, and Spanish, with a total of 10,992 source document clusters and corresponding target summaries. Each source document cluster is composed of multiple documents in different languages, and the corresponding target summary is in English. Additionally, we propose a graph-based extract-generate model for the MLMD task. This model first uses an extractor based on graph neural networks to extract key sentences from a source document cluster, and then employs a generator based on pre-trained models to generate the target summary from these key sentences. Finally, we benchmark various methods on the MLMD-news dataset and publicly release our dataset and code to advance research in summarization within MLMD scenarios. The contributions of this paper are summarized as follows:

- We construct the first mixed-language multi-document (MLMD) dataset, where each source document cluster contains multiple news documents in different languages.
- We propose a graph-based extract-generate model as a benchmark for MLMD.
- We perform benchmark experiments on MLMD-news using various methods and have publicly released the dataset and code to advance research in this field.

# 2 Related Work

The related work in news summarization research primarily focuses on three areas, as detailed below:

Single-Language Single-Document Summarization (SLSD): As shown in Figure 1.(a), the SLSD news summarization task takes a source document as input and outputs a target summary in the same language. Existing methods are mainly divided into two categories: extractive and abstractive. Extractive summarization constructs the target summary by directly selecting key sentences or paragraphs from the source document, as in TextRank (Mihalcea and Tarau, 2004) and DeepSumm (Joshi et al., 2023). Abstractive summarization, on the other hand, involves first understanding the content of the source document and then generating new summary sentences for the target summary, as in BERTSUM (Liu and Lapata, 2019) and COGITOERGOSUMM (Frisoni et al., 2023).

Cross-Language Single-Document Summarization (CLSD): As shown in Figure 1.(b), the CLSD news summarization task takes a source document as input and produces a target summary in a different language. Existing research is primarily divided into pipeline-based and end-to-end approaches. Traditional CLSD methods typically use pipeline-based methods (Boudin et al., 2011; Linhares Pontes et al., 2018), where the source document is first translated and then summarized, or the summary is generated first and then translated into the target language. In recent years, researchers have increasingly focused on end-to-end CLSD methods (Le, 2024; Cai and Yuan, 2024), which can directly generate summaries in the target language, significantly reducing the risk of error propagation.

Single-Language Multi-Document Summarization (SLMD): As shown in Figure 1.(c), the SLMD news summarization task takes a source document cluster containing multiple documents as input, and outputs a target summary in the same language. Existing methods can be categorized into extractive, abstractive, and hybrid. Early on, due to the small sample size of SLMD datasets like DUC 2004 (Over and Yen, 2004), research on multi-document summarization primarily relied on extractive methods (Mei and Chen, 2012; Wan et al., 2015). In recent years, the emergence of large-scale SLMD datasets such as Multi-News (Fabbri et al., 2019) has accelerated the development of abstractive (Jin and Wan, 2020; Liu et al., 2021) and hybrid SLMD news summarization (Celikyilmaz and Hakkani-Tur, 2010; Song et al., 2022; Ghadimi and Beigy, 2022).

Recently, with the development of multi-language models such as mBART (Tang et al., 2020) and GPT (Floridi and Chiriatti, 2020; Achiam et al., 2023), a task known as multi-language multi-document news summarization (Giannakopoulos, 2013; Zopf, 2018; Mascarell et al., 2023) has emerged within the SLMD paradigm. This task involves inputs and outputs similar to those in SLMD, where a source document cluster is input and a target summary in the same language is produced. The difference lies in that the languages of different source document clusters can vary, thereby further requiring the model to have multilingual understanding capabilities.

Figure 2: The diagram illustrates the construction process of the MLMD-news dataset. First, a round-trip translation (RTT) strategy is employed to translate each news document in the source document clusters of the Multi-News dataset into multiple languages and then back into the original language. This process allows the calculation of the ROUGE-1 score matrix for the document cluster. Based on this score matrix, a greedy selection strategy is used to assign a corresponding language to each news document. The original content of the news document is then replaced with the translated content in the assigned language, resulting in a source document cluster with mixed languages.

# 3 The MLMD-news dataset

The overall process of constructing the MLMD-news dataset is illustrated in Figure 2. The MLMD-news dataset is built upon the Multi-News dataset, a well-known and widely used English multi-document summarization dataset. The construction process employs a round-trip translation strategy and a greedy selection strategy to approximate real-world cases. Similar construction methods, such as that of the NCLS dataset (Zhu et al., 2019) in cross-language summarization, have been widely accepted and used. The main goal of the round-trip translation strategy is to calculate a ROUGE-1 score matrix that reflects translation quality, while the greedy selection strategy is used to assign the most suitable language to each news document in the source document cluster and make the necessary replacements.

# 3.1 Round-trip Translation Strategy

The round-trip translation strategy first uses machine translation services² to translate text from the original language into another language (forward translation) and then uses machine translation again to translate the text back into the original language (back translation). This strategy was used by Zhu et al. (2019) to construct cross-language single-document summarization datasets from single-language single-document summarization datasets.

Therefore, we use the round-trip translation strategy to construct the MLMD-news dataset. First, the original English news documents from the Multi-News dataset are translated into Spanish, French, and German through forward translation. Then, these translated documents are back-translated into English. The English documents obtained from the back translation of each language are compared with the original English news documents, and ROUGE-1 scores are calculated. If the ROUGE-1 score for a particular language is below a threshold, it is set to zero; otherwise, the score is retained. This yields a ROUGE-1 score matrix, where each row represents a document and each column corresponds to a language.

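To make the scoring step concrete, the following is a minimal Python sketch of the round-trip-translation matrix described above. It is an illustration rather than the authors' released code: `translate` and `rouge_1` are hypothetical placeholders for the machine translation service and a standard ROUGE-1 scorer, and the per-language thresholds are those reported in Section 5.2.

```python
import numpy as np

def rtt_score_matrix(docs, languages, translate, rouge_1, thresholds):
    """Round-trip-translation ROUGE-1 matrix for one document cluster.

    docs:       list of English source documents.
    languages:  target languages, e.g. ["es", "fr", "de"].
    translate:  translate(text, src, tgt) - placeholder for an MT service.
    rouge_1:    rouge_1(hyp, ref) - ROUGE-1 score on a 0-100 scale.
    thresholds: per-language minimum ROUGE-1 scores, e.g. {"fr": 88.03, ...}.
    """
    matrix = np.zeros((len(docs), len(languages)))
    for i, doc in enumerate(docs):
        for j, lang in enumerate(languages):
            forward = translate(doc, "en", lang)   # forward translation
            back = translate(forward, lang, "en")  # back translation
            score = rouge_1(back, doc)
            # Scores below the per-language threshold are zeroed out.
            matrix[i, j] = score if score >= thresholds[lang] else 0.0
    return matrix
```
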
# 3.2 Greedy Selection Strategy

As shown in Figure 2, after obtaining the ROUGE-1 score matrix, a greedy selection strategy is used to assign a language to each news document in the document cluster from the Multi-News dataset (see the sketch below). Specifically, we first identify the row and column of the maximum value in the matrix and assign the language indicated by the column to the document indicated by the row. The corresponding row and column are then removed to form a new submatrix. This process is repeated until all news documents have been assigned a language. If at any step all values in the submatrix are zero, the remaining news documents are assigned English. After completing the language assignment, each news document is transformed into its assigned language using the forward translation from the round-trip translation step, replacing the content of the source document. If the assigned language is English, the document remains in its original English form. This results in a mixed-language document cluster. Finally, this mixed-language document cluster is combined with the original target summary to form an MLMD summary pair.

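The greedy assignment itself reduces to a repeated argmax-and-delete over the score matrix. The NumPy sketch below is one possible reading of the procedure: documents left over once the language columns are exhausted, or once only zeros remain, default to English, as described above.

```python
import numpy as np

def greedy_assign(matrix, languages):
    """Greedily assign one language per document from a ROUGE-1 matrix.

    Repeatedly picks the largest remaining cell, assigns that column's
    language to that row's document, then removes the row and column.
    Documents still unassigned when only zeros (or no columns) remain
    keep their original English form.
    """
    assignment = {i: "en" for i in range(matrix.shape[0])}  # default: English
    rows = list(range(matrix.shape[0]))
    cols = list(range(matrix.shape[1]))
    while rows and cols:
        sub = matrix[np.ix_(rows, cols)]
        if sub.max() == 0:  # all remaining scores fell below the threshold
            break
        r, c = np.unravel_index(sub.argmax(), sub.shape)
        assignment[rows[r]] = languages[cols[c]]
        del rows[r]   # shrink to the submatrix for the next step
        del cols[c]
    return assignment
```
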
<table><tr><td></td><td></td><td>Train</td><td>Valid</td><td>Test</td></tr><tr><td rowspan="6">Total</td><td>#</td><td>8444</td><td>1277</td><td>1271</td></tr><tr><td>Avg.Doc</td><td>2.79</td><td>2.75</td><td>2.71</td></tr><tr><td>Avg.ClusterWords</td><td>2442.97</td><td>2457.48</td><td>2255.81</td></tr><tr><td>Avg.ClusterSents</td><td>84.14</td><td>85.49</td><td>77.85</td></tr><tr><td>Avg.SumWords</td><td>269.17</td><td>268.32</td><td>265.70</td></tr><tr><td>Avg.SumSents</td><td>9.70</td><td>9.60</td><td>9.56</td></tr><tr><td rowspan="3">En</td><td>Count</td><td>7088</td><td>1027</td><td>1009</td></tr><tr><td>Avg.DocWords</td><td>653.19</td><td>732.79</td><td>706.98</td></tr><tr><td>Avg.DocSents</td><td>24.52</td><td>27.18</td><td>26.00</td></tr><tr><td rowspan="3">Fr</td><td>Count</td><td>5307</td><td>779</td><td>779</td></tr><tr><td>Avg.DocWords</td><td>1020.20</td><td>969.46</td><td>951.30</td></tr><tr><td>Avg.DocSents</td><td>32.99</td><td>31.93</td><td>31.53</td></tr><tr><td rowspan="3">De</td><td>Count</td><td>4431</td><td>693</td><td>646</td></tr><tr><td>Avg.DocWords</td><td>981.05</td><td>1036.24</td><td>825.07</td></tr><tr><td>Avg.DocSents</td><td>32.96</td><td>35.34</td><td>26.93</td></tr><tr><td rowspan="3">Es</td><td>Count</td><td>6769</td><td>1015</td><td>1009</td></tr><tr><td>Avg.DocWords</td><td>1047.78</td><td>1048.92</td><td>1015.58</td></tr><tr><td>Avg.DocSents</td><td>36.20</td><td>36.64</td><td>35.44</td></tr></table>

Table 1: Statistics of the MLMD-news dataset. "#" represents the number of source document cluster and target summary pairs. "Avg.Doc", "Avg.ClusterWords" and "Avg.ClusterSents" indicate the average number of documents, average number of tokens, and average number of sentences per source document cluster, respectively. "Avg.SumWords" and "Avg.SumSents" denote the average number of tokens and average number of sentences in the target summary. "Count", "Avg.DocWords" and "Avg.DocSents" represent the total number of documents, average number of tokens per document, and average number of sentences per document, respectively.

# 3.3 Statistics and Analysis

Through the aforementioned process, we constructed the MLMD-news dataset, which contains 10,992 pairs of source document clusters and target summaries. The source document clusters include four languages: English, French, German, and Spanish, while the target summaries are all in English. The dataset was divided into training, validation, and test sets. Table 1 presents the statistical information of the MLMD-news dataset. Figure 3 shows the number of news documents in different languages across the training, validation, and test sets. Due to the quality control implemented through the round-trip translation strategy, the proportions of news documents differ across languages.

Figure 3: The number of news documents in different languages across the training, validation, and test sets.

Figure 4: The proportion of the number of languages involved in the source document clusters across the training, validation, and test sets.

The number of English and Spanish documents is roughly equal, French ranks third, while German has the fewest articles. Figure 4 illustrates the proportion of the number of languages involved in the source document clusters across the training, validation, and test sets. The proportions are relatively consistent, indicating a stable language distribution. Source document clusters containing 2 languages are the most common, while those containing 4 languages are the least common.

# 4 Graph-based Extract-Generate Model

To the best of our knowledge, there is currently no method specifically designed for the MLMD task. The input for MLMD news summarization consists of multiple mixed-language documents, which presents two main challenges: the excessive length of the input and the complex relationships between multiple documents and languages. Therefore, we propose a graph-based extract-generate model (as shown in Figure 5) as a baseline for this task. The extract-then-generate approach addresses the issue of long input, while the graphs are used to model the complex relationships between multiple documents and languages. The model consists of three main modules: Graph Construction, Extractor, and Generator. This section provides a detailed explanation of these three modules.

Figure 5: The framework of the extract-generate model involves three main components. In Graph Construction, the mixed-language source document cluster is converted into both a homogeneous and a heterogeneous graph. The Extractor extracts key sentences from the source document cluster, while the Generator generates a summary based on the sentences extracted by the Extractor.

# 4.1 Graph Construction

In order to model the complex relationships between multiple documents and languages, we construct, for each input mixed-language document cluster, a homogeneous graph over sentences as well as a heterogeneous graph over sentences and words.

# 4.1.1 Homogeneous Graph Construction

Let $G_{1} = \{V_{1}, E_{1}\}$ denote a homogeneous graph, where the node set $V_{1} = \{s_{1}, s_{2}, \ldots, s_{n}\}$ corresponds to the sentences within the document cluster, and the edge set $E_{1} = \{e_{1,1}, e_{1,2}, \dots, e_{n,n}\}$ denotes the connections between sentences that share common words. Moreover, we follow BERTSUM (Liu and Lapata, 2019) to obtain the initial node representations $H_{s}^{\prime(0)}$.

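The shared-word edge rule can be sketched in a few lines; this is an illustrative reading of the rule above, not the authors' implementation, and it assumes sentences are already tokenized (any stopword filtering the authors may apply is not specified in the paper and is omitted here).

```python
from itertools import combinations

def build_homogeneous_edges(sentences):
    """Connect sentences that share at least one word.

    sentences: list of token lists, one per sentence in the cluster.
    Returns an undirected edge list over sentence indices.
    """
    vocab = [set(tokens) for tokens in sentences]
    edges = []
    for i, j in combinations(range(len(sentences)), 2):
        if vocab[i] & vocab[j]:  # shared-word criterion
            edges.append((i, j))
    return edges
```
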
# 4.1.2 Heterogeneous Graph Construction

Let $G_{2} = \{V_{2}, E_{2}\}$ denote a heterogeneous graph, where $V_{2}$ is the set of nodes and $E_{2}$ is the set of edges. In this graph, the nodes can be represented as $V_{2} = V_{1} \cup V_{w}$, where $V_{w} = \{w_{1}, w_{2}, \ldots, w_{k}\}$ is the set of words. The edges, denoted as $E_{2} = \{e_{1,1}, \ldots, e_{1,k}, \ldots, e_{n,1}, \ldots, e_{n,k}\}$, represent the connections between the $i^{th}$ sentence and the $j^{th}$ word, with edge weights determined by TF-IDF (Aizawa, 2003). We use mBERT to initialize the static embeddings of the word nodes as $H_{w}^{(0)}$. To obtain the representations of the sentence nodes, we first concatenate the tokens of the words in a sentence and input them into mBERT to create the initial sentence representation $H_{s}$. Next, we employ a convolutional neural network (CNN) to capture local information within the sentence, and a bidirectional long short-term memory (BiLSTM) network to capture contextual dependencies across sentences. Finally, by concatenating the outputs of the CNN and BiLSTM, we obtain a sentence node representation that encompasses both intra-sentence and inter-sentence information, denoted as $H_{s}^{(0)}$.

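The sentence-word edges with TF-IDF weights can be sketched with scikit-learn's `TfidfVectorizer`, as below. This is an assumption-laden illustration: the paper does not state which TF-IDF implementation or which multilingual tokenization it uses.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def build_heterogeneous_edges(sentences):
    """Sentence-word bipartite edges weighted by TF-IDF.

    sentences: list of raw sentence strings from the mixed-language cluster.
    Returns (edges, words), where each edge is (sentence_idx, word_idx, weight).
    """
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(sentences)  # shape: [n_sents, n_words]
    words = vectorizer.get_feature_names_out()
    rows, cols = tfidf.nonzero()
    edges = [(int(i), int(j), float(tfidf[i, j])) for i, j in zip(rows, cols)]
    return edges, words
```
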
# 4.2 Extractor

In the extractor, we first perform sentence representation learning and then extract key sentences.

# 4.2.1 Sentence Representation Learning

Before extracting key sentences, we first use a GAT (Graph Attention Network) (Veličković et al., 2017) and a heterogeneous GAT (Wang et al., 2020) to learn sentence representations on the homogeneous and heterogeneous graphs, respectively.

In the homogeneous graph, we calculate the sentence representation $H_{s}^{\prime(1)}$ using the following formulas:

$$
U_{s \rightarrow s}^{\prime(1)} = \mathrm{GAT}_{s2s}\left(H_{s}^{\prime(0)}, H_{s}^{\prime(0)}, H_{s}^{\prime(0)}\right)
$$

$$
H_{s}^{\prime(1)} = \mathrm{FNN}\left(H_{s}^{\prime(0)} + U_{s \rightarrow s}^{\prime(1)}\right) \tag{1}
$$

In the heterogeneous graph, we learn the sentence representation $H_{s}^{(n)}$ through $n$ iterations, where the iteration process at step $t + 1$ is as follows:

$$
U_{s \rightarrow w}^{(t+1)} = \mathrm{GAT}_{s2w}\left(H_{w}^{(t)}, H_{s}^{(t)}, H_{s}^{(t)}\right)
$$

$$
H_{w}^{(t+1)} = \mathrm{FNN}\left(H_{w}^{(t)} + U_{s \rightarrow w}^{(t+1)}\right)
$$

$$
U_{w \rightarrow s}^{(t+1)} = \mathrm{GAT}_{w2s}\left(H_{s}^{(t)}, H_{w}^{(t+1)}, H_{w}^{(t+1)}\right)
$$

$$
H_{s}^{(t+1)} = \mathrm{FNN}\left(H_{s}^{(t)} + U_{w \rightarrow s}^{(t+1)}\right) \tag{2}
$$

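For intuition, the following is a rough PyTorch Geometric sketch of one iteration of Eq. (2), assuming `GATConv` as the attention layer. Multi-head attention, layer sizes, and the exact form of the FNN are simplifications, so the real model's details may differ.

```python
import torch.nn as nn
from torch_geometric.nn import GATConv  # assumed GAT implementation

class HeteroSentenceUpdater(nn.Module):
    """One sentence/word message-passing round in the spirit of Eq. (2)."""

    def __init__(self, dim):
        super().__init__()
        # Self-loops are disabled because the graph is bipartite.
        self.gat_s2w = GATConv(dim, dim, add_self_loops=False)  # sentences -> words
        self.gat_w2s = GATConv(dim, dim, add_self_loops=False)  # words -> sentences
        self.ffn_w = nn.Linear(dim, dim)
        self.ffn_s = nn.Linear(dim, dim)

    def forward(self, h_s, h_w, edge_s2w, edge_w2s):
        # Update word nodes from their sentence neighbours, with a residual link.
        u_w = self.gat_s2w((h_s, h_w), edge_s2w)
        h_w = self.ffn_w(h_w + u_w)
        # Update sentence nodes from the refreshed word nodes.
        u_s = self.gat_w2s((h_w, h_s), edge_w2s)
        h_s = self.ffn_s(h_s + u_s)
        return h_s, h_w
```
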
# 4.2.2 Extracting Key Sentences

To extract key sentences, we first concatenate the sentence representations learned on the homogeneous graph, which capture inter-sentence relationships, with those learned on the heterogeneous graph, which capture intra-sentence relationships. This yields a final sentence representation that incorporates both inter-sentence and intra-sentence relationships. Then, we use top-$K$ selection (see Section 4.4) to extract the indices and scores of the top-$K$ key sentences. This process can be represented as follows:

$$
H_{s}^{*} = H_{s}^{\prime(1)} \oplus H_{s}^{(n)} \tag{3}
$$

$$
\mathit{indices}, \mathit{score} = \operatorname{top-}K\left(H_{s}^{*}\right)
$$

Finally, we use these indices to locate the key sentences and combine them to form the key snippet $X_{key}$.

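Putting Eq. (3) and the top-$K$ step together, a sketch of the extraction head might look as follows. The scoring head `scorer` is a hypothetical stand-in, since the paper does not spell out how scalar scores are produced from $H_{s}^{*}$.

```python
import torch

def extract_key_sentences(h_homo, h_hetero, scorer, sentences, k=10):
    """Score sentences from concatenated representations and keep the top-K.

    h_homo:   [n, d] sentence states from the homogeneous graph.
    h_hetero: [n, d] sentence states from the heterogeneous graph.
    scorer:   a small scoring head, e.g. nn.Linear(2 * d, 1) - an assumption.
    """
    h = torch.cat([h_homo, h_hetero], dim=-1)  # Eq. (3): concatenation
    scores = scorer(h).squeeze(-1)             # one score per sentence
    score, indices = torch.topk(scores, k=min(k, len(sentences)))
    key_snippet = " ".join(sentences[i] for i in indices.tolist())
    return key_snippet, indices, score
```
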
# 4.3 Generator

In the generator, we first input $x \in X_{key}$ into mBART and obtain $h_x^t$, the model's output before the final language model head. Then, $h_x^t$ is fed into the final language model head and a multilayer perceptron (MLP) to obtain a generation probability $P_{\theta}(y_t \mid x, y_{<t})$ and a dynamic weight $P_{\theta}(x \mid X_{key}, y_{<t})$, respectively. Here, the dynamic weight represents the probability of selecting $x$ from $X_{key}$ for summary generation. Therefore, the generation probability of the final summary $y$ is calculated by marginalizing (MARG) as follows:

$$
P_{\theta}(y \mid x, X_{key}) = \prod_{t=1}^{T} \sum_{x \in X_{key}} P_{\theta}\left(y_{t} \mid x, y_{<t}\right) P_{\theta}\left(x \mid X_{key}, y_{<t}\right) \tag{4}
$$

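Given per-snippet token probabilities and dynamic weights, the marginalisation of Eq. (4) is a weighted sum over snippets at each step followed by a product over time steps. A minimal tensor sketch follows; a real implementation would accumulate in log space for numerical stability, which is omitted here for clarity.

```python
import torch

def marginalized_sequence_prob(gen_probs, snippet_weights):
    """Marginalise per-snippet generation probabilities, as in Eq. (4).

    gen_probs:       [K, T] - P(y_t | x_k, y_<t) for each key snippet x_k.
    snippet_weights: [K, T] - P(x_k | X_key, y_<t), the dynamic weights.
    """
    per_step = (gen_probs * snippet_weights).sum(dim=0)  # sum over x in X_key
    return per_step.prod()                               # product over t
```
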
# 4.4 Loss

Extractor Loss: For the extractor, we use cross-entropy to measure the loss of key sentence extraction:

$$
\mathcal{L}_{ext} = -\left(z \log(\hat{z}) + (1 - z) \log(1 - \hat{z})\right) \tag{5}
$$

where $\hat{z}$ is the predicted result, which can be computed using the indices from Eq. (3), and $z$ is the true label. The labels are constructed as follows: first, we use mBERT to represent all the sentences in the source document cluster and the target summary; then, we calculate the cosine similarity between these sentences and the target summary, and label the top-$K$ sentences with the highest similarity as key sentences.

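The pseudo-label construction can be sketched directly: embed the sentences and the summary with mBERT (the embeddings are assumed precomputed here), then mark the $K$ most similar sentences as positives.

```python
import torch
import torch.nn.functional as F

def make_extraction_labels(sent_embs, summary_emb, k=10):
    """Pseudo-labels for the extractor: top-K sentences by cosine similarity.

    sent_embs:   [n, d] mBERT embeddings of all cluster sentences.
    summary_emb: [d]    mBERT embedding of the target summary.
    """
    sims = F.cosine_similarity(sent_embs, summary_emb.unsqueeze(0), dim=-1)
    labels = torch.zeros(sent_embs.size(0))
    top = sims.topk(k=min(k, sent_embs.size(0))).indices
    labels[top] = 1.0  # the K most summary-like sentences become key sentences
    return labels
```
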
Generator Loss: For the generator, we use the negative log-likelihood (NLL) loss:

$$
\mathcal{L}_{gen} = -\log P_{\theta}(y \mid x, X_{key}) \tag{6}
$$

Consistency Loss: The dynamic weight $P_{\theta}(x \mid X_{key}, y_{<t})$ of the generator represents the probability of selecting $x$ from $X_{key}$ at the $t$-th time step, essentially serving the same function as the extractor. Therefore, we adopt the KL divergence-based consistency loss proposed by Mao et al. (2021) to quantify the difference between the average dynamic weight and the extractor's predicted scores:

$$
\mathcal{L}_{con} = \mathrm{KL}\left(\frac{1}{T} \sum_{t=1}^{T} P_{\theta}\left(x \mid X_{key}, y_{<t}\right),\ \operatorname{Softmax}(\mathit{score})\right) \tag{7}
$$

Total Loss: The overall model loss is defined as follows:

$$
\mathcal{L}_{total} = \lambda_{ext} \mathcal{L}_{ext} + \lambda_{gen} \mathcal{L}_{gen} + \lambda_{con} \mathcal{L}_{con} \tag{8}
$$

where $\lambda_{ext}$, $\lambda_{gen}$, and $\lambda_{con}$ are hyperparameters.

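Combining Eqs. (5) through (8), a sketch of the total training loss follows, with default weights taken from Section 5.2. Note the KL direction: Eq. (7) compares the time-averaged dynamic weights against the softmax of the extractor scores, and PyTorch's `kl_div` expects log-probabilities as its first argument.

```python
import torch.nn.functional as F

def total_loss(ext_logits, ext_labels, gen_nll, dyn_weights, ext_scores,
               lambda_ext=1.0, lambda_gen=0.1, lambda_con=1e-4):
    """Combine the three training losses of Eq. (8).

    ext_logits, ext_labels: extractor predictions and pseudo-labels (Eq. 5).
    gen_nll:                generator negative log-likelihood (Eq. 6).
    dyn_weights: [T, K]     dynamic weights P(x | X_key, y_<t) per time step.
    ext_scores:  [K]        extractor scores for the selected sentences.
    """
    l_ext = F.binary_cross_entropy_with_logits(ext_logits, ext_labels)
    # Eq. (7): KL(avg dynamic weights || softmax of extractor scores).
    avg_w = dyn_weights.mean(dim=0)
    l_con = F.kl_div(F.log_softmax(ext_scores, dim=-1), avg_w, reduction="sum")
    return lambda_ext * l_ext + lambda_gen * gen_nll + lambda_con * l_con
```
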
# 5 Experiments

In this section, we introduce the baselines we used and present the implementation details.

# 5.1 Baselines

To benchmark the MLMD-news dataset, in addition to our proposed graph-based extract-generate method, we also used the following baselines, which can be categorized into Extract-then-translate, Translate-then-extract, Translate-then-MDS, Abstractive models, LLM, and Extract-then-abstract.

Extract-then-translate: First, summaries are extracted from the source document cluster using classic extractive models such as Centroid (Radev et al., 2004), LexRank (Erkan and Radev, 2004), MMR (Carbonell and Goldstein, 1998), and TextRank (Mihalcea and Tarau, 2004), and then translated into the target language.

Translate-then-extract: First, the documents in the source document cluster are translated into the target language, and then summaries are extracted using classic extractive models such as Centroid, LexRank, MMR, and TextRank.

Translate-then-MDS: First, the documents in the source document cluster are translated into the target language, and then summaries are generated using SLMD models such as PRIMERA (Xiao et al., 2022) and PEGASUS (Zhang et al., 2019).

Abstractive models: We use mT5 (Xue, 2020) and mBART (Tang et al., 2020), which have multi-language understanding and generation capabilities, to directly generate summaries from the source document cluster. If the input exceeds the model's capacity, the excess is truncated.

LLM: We use models such as GPT-3.5-turbo-16k³, GPT-4.0-turbo-32k³, Llama-3.1-8B-Instruct (Dubey et al., 2024), Gemini-1.5-pro⁴, and Claude-2.1⁵, which have multi-language and long-input capabilities, to directly generate summaries from the source document cluster.

Extract-then-abstract: First, classic extractive models such as Centroid and TextRank are used to extract summaries from the source document cluster, and then the target summary is generated with generative models like mT5 and mBART.

# 5.2 Implementation Details

In constructing MLMD-news, the ROUGE-1 thresholds for French, German, and Spanish were set to 88.03, 87.05, and 89.25, respectively, based on the average ROUGE-1 scores for the news in each language. For the graph-based extract-generate model, we set $\lambda_{ext} = 1$, $\lambda_{gen} = 0.1$, and $\lambda_{con} = 0.0001$. The extractor's learning rate was set to $5 \times 10^{-6}$, while the generator's learning rate was $5 \times 10^{-5}$. The batch size was 8, and top-$K$ was set to 10. ROUGE is calculated with pyrouge<sup>6</sup>. All experiments were conducted on NVIDIA L20 GPUs. In addition, the total number of parameters in the graph-based extract-generate model is about 800M.

# 5.3 Benchmark Experiments

In Table 2, we present the ROUGE scores for different methods on the MLMD-news dataset; the following observations can be made:

<table><tr><td></td><td>R-1</td><td>R-2</td><td>R-L</td></tr><tr><td colspan="4">Extract-then-translate</td></tr><tr><td>Centroid</td><td>27.90</td><td>6.92</td><td>23.35</td></tr><tr><td>LexRank</td><td>28.61</td><td>7.30</td><td>24.27</td></tr><tr><td>MMR</td><td>24.07</td><td>5.61</td><td>20.23</td></tr><tr><td>TextRank</td><td>28.66</td><td>7.28</td><td>24.13</td></tr><tr><td colspan="4">Translate-then-extract</td></tr><tr><td>Centroid</td><td>29.16</td><td>7.64</td><td>23.60</td></tr><tr><td>LexRank</td><td>31.12</td><td>8.53</td><td>25.70</td></tr><tr><td>MMR</td><td>25.58</td><td>6.11</td><td>20.93</td></tr><tr><td>TextRank</td><td>30.18</td><td>8.04</td><td>24.55</td></tr><tr><td colspan="4">Translate-then-MDS</td></tr><tr><td>PEGASUS</td><td>35.54</td><td>9.44</td><td>29.95</td></tr><tr><td>PRIMERA</td><td>36.83</td><td>10.79</td><td>32.12</td></tr><tr><td colspan="4">Abstractive models</td></tr><tr><td>mBART(1024)</td><td>36.84</td><td>8.13</td><td>32.22</td></tr><tr><td>mT5(1024)</td><td>33.21</td><td>6.26</td><td>27.43</td></tr><tr><td colspan="4">LLM</td></tr><tr><td>GPT-3.5-turbo-16k</td><td>34.36</td><td>8.88</td><td>30.74</td></tr><tr><td>GPT-4.0-turbo-32k</td><td>39.02</td><td>10.45</td><td>34.68</td></tr><tr><td>Llama-3.1-8B-Instruct</td><td>36.61</td><td>10.44</td><td>33.31</td></tr><tr><td>Gemini-1.5-pro</td><td>40.79</td><td>12.05</td><td>36.59</td></tr><tr><td>Claude-2.1</td><td>40.51</td><td>11.67</td><td>36.16</td></tr><tr><td colspan="4">Extract-then-abstract</td></tr><tr><td>TextRank-then-mBART</td><td>32.00</td><td>5.84</td><td>28.00</td></tr><tr><td>Centroid-then-mBART</td><td>32.76</td><td>5.70</td><td>28.71</td></tr><tr><td>TextRank-then-mT5</td><td>31.63</td><td>5.22</td><td>26.46</td></tr><tr><td>Centroid-then-mT5</td><td>31.39</td><td>5.21</td><td>26.25</td></tr><tr><td>Ours</td><td>39.16</td><td>9.64</td><td>34.02</td></tr></table>

Table 2: The benchmark experimental results on the MLMD-news dataset.

- The ROUGE scores for the Extract-then-translate methods are quite low, which can be attributed to the limited support of classic extractive methods for mixed languages and to the translation of the extracted sentences.
- The ROUGE scores for the Translate-then-extract methods are higher than those for the Extract-then-translate methods, possibly because these classic extractive methods perform better with single-language input.
- The ROUGE scores of the Translate-then-MDS methods are significantly higher than those of the Translate-then-extract methods. This result is expected, because the MDS models are designed for multi-document processing, while the traditional extraction methods are not.
- Abstractive models show a significant advantage in ROUGE scores compared to the Extract-then-translate and Translate-then-extract methods, possibly because they possess strong multi-language understanding capabilities. Additionally, the performance of Abstractive models is similar to Translate-then-MDS, indicating that multi-document processing capability and multilingual understanding are equally important in MLMD.
- The best and second-best results are achieved by LLMs, mainly due to their strong multi-language understanding and generation capabilities, as well as their ability to accept very long input documents.
- Aside from our proposed method, the other Extract-then-abstract methods have lower ROUGE scores than the Abstractive models. This suggests that inappropriate extraction may not only fail to enhance summarization performance but can also lead to poorer final results due to the loss of important information.
- The results indicate that our method addresses the above issues of the other Extract-then-abstract methods and achieves performance close to that of LLMs, demonstrating its effectiveness.

# 5.4 Ablation Study

In Table 3, we present the impact of different modules of our model on the MLMD-news dataset, including the extractor module, the generator module, and the consistency loss. When the extractor module is removed (i.e., w/o extractor), our method degenerates to mBART, resulting in decreases of 2.32, 1.51, and 1.80 points in ROUGE-1, ROUGE-2, and ROUGE-L, respectively. This indicates that extracting key sentences significantly impacts the overall quality of the summary. When the generator module is removed (i.e., w/o generator), the extracted sentences are in multiple languages and must be converted into a summary by machine translation, resulting in decreases of 4.99, 1.19, and 2.98 points in ROUGE-1, ROUGE-2, and ROUGE-L, respectively. This indicates that the generator plays a crucial role in the overall quality of the summary. Finally, when the consistency loss is removed (i.e., w/o consistency), ROUGE-1, ROUGE-2, and ROUGE-L decrease by 0.35, 0.23, and 0.17 points, respectively. This suggests that the consistency loss helps optimize the extraction quality of the extractor.

<table><tr><td></td><td>R-1</td><td>R-2</td><td>R-L</td></tr><tr><td>Ours</td><td>39.16</td><td>9.64</td><td>34.02</td></tr><tr><td>w/o extractor</td><td>36.84</td><td>8.13</td><td>32.22</td></tr><tr><td>w/o generator</td><td>34.17</td><td>8.45</td><td>31.04</td></tr><tr><td>w/o consistency</td><td>38.81</td><td>9.41</td><td>33.85</td></tr></table>

Table 3: Ablation study.

# 5.5 Parameter Sensitivity

Figure 6: Parameter sensitivity of top-$K$ on the ROUGE score.

We also explored the impact of extracting different numbers of key sentences (i.e., different $K$ in top-$K$) on model performance on the MLMD-news dataset. As shown in Figure 6, the ROUGE score increases with $K$, reaching a peak at $K = 10$. However, when $K$ exceeds 10, the ROUGE score begins to decline, possibly due to the noise introduced by including too many sentences.

# 6 Conclusion

In this paper, we constructed the first mixed-language multi-document news summarization dataset (MLMD-news) and proposed a graph-based extract-generate model specifically designed for the MLMD news summarization task. We conducted benchmark tests on the MLMD-news dataset, evaluating our proposed method along with advanced methods such as LLMs. Additionally, we have publicly released the dataset and code, hoping to foster further development in the MLMD news summarization area.

# Limitations

Although our method demonstrates significant performance advantages in the mixed-language multi-document summarization task, due to GPU resource limitations we set the maximum number of extracted sentences (top-$K$) in our experiments to 13. Increasing the top-$K$ value further might improve the model's optimal performance, but this hypothesis has yet to be validated. Moreover, the mixed-language dataset we constructed currently covers primarily high-resource languages such as German and English, with a limited number of languages involved. Future work could extend to more languages, especially low-resource ones, to further verify the method's applicability and generalization capability.

# Ethical Considerations

Our MLMD-news dataset is built on the publicly available multi-document summarization dataset Multi-News, through translation and filtering processes. During the construction of the dataset, we strictly adhered to academic ethical guidelines, respected data privacy and related rights, and ensured that the use of the data complied with ethical standards. At the same time, we implemented rigorous procedures and standards to guarantee the transparency and reliability of data processing, thus supporting credible research outcomes.

# References

Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.
Akiko Aizawa. 2003. An information-theoretic perspective of tf-idf measures. Information Processing & Management, 39(1):45-65.
Florian Boudin, Stéphane Huet, and Juan-Manuel Torres-Moreno. 2011. A graph-based approach to cross-language multi-document summarization. Polibits, (43):113-118.
Yuang Cai and Yuyu Yuan. 2024. CAR-Transformer: Cross-attention reinforcement transformer for cross-lingual summarization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 17718-17726.
Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 335-336.
Asli Celikyilmaz and Dilek Hakkani-Tur. 2010. A hybrid hierarchical model for multi-document summarization. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 815-824.
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle, Aiesha Letman, Akhil Mathur, Alan Schelten, Amy Yang, Angela Fan, et al. 2024. The Llama 3 herd of models. arXiv preprint arXiv:2407.21783.
Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.
Alexander R. Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir R. Radev. 2019. Multi-News: A large-scale multi-document summarization dataset and abstractive hierarchical model. arXiv preprint arXiv:1906.01749.
Luciano Floridi and Massimo Chiriatti. 2020. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines, 30:681-694.
Giacomo Frisoni, Paolo Italiani, Stefano Salvatori, and Gianluca Moro. 2023. Cogito ergo summ: Abstractive summarization of biomedical papers via semantic parsing graphs and consistency rewards. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pages 12781-12789.
Alireza Ghadimi and Hamid Beigy. 2022. Hybrid multi-document summarization using pre-trained language models. Expert Systems with Applications, 192:116292.
George Giannakopoulos. 2013. Multi-document multilingual summarization and evaluation tracks in ACL 2013 MultiLing workshop. In Proceedings of the MultiLing 2013 Workshop on Multilingual Multi-document Summarization, pages 20-28.
Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 362-370.
Hanqi Jin and Xiaojun Wan. 2020. Abstractive multi-document summarization via joint learning with single-document summarization. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2545-2554.
Akanksha Joshi, Eduardo Fidalgo, Enrique Alegre, and Laura Fernández-Robles. 2023. DeepSumm: Exploiting topic models and sequence to sequence networks for extractive text summarization. Expert Systems with Applications, 211:118442.
Thang Le. 2024. Cross-lingual summarization with pseudo-label regularization. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 4644-4677.
Elvys Linhares Pontes, Stéphane Huet, Juan-Manuel Torres-Moreno, and Andréa Carneiro Linhares. 2018. Cross-language text summarization using sentence and multi-sentence compression. In Natural Language Processing and Information Systems: 23rd International Conference on Applications of Natural Language to Information Systems, NLDB 2018, Paris, France, June 13-15, 2018, Proceedings 23, pages 467-479. Springer.
Marina Litvak and Mark Last. 2008. Graph-based keyword extraction for single-document summarization. In Coling 2008: Proceedings of the Workshop on Multi-source Multilingual Information Extraction and Summarization, pages 17-24.
Shuaiqi Liu, Jiannong Cao, Ruosong Yang, and Zhiyuan Wen. 2021. Highlight-Transformer: Leveraging key phrase aware attention to improve abstractive multi-document summarization. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5021-5027.
Yang Liu and Mirella Lapata. 2019. Text summarization with pretrained encoders. arXiv preprint arXiv:1908.08345.
Ziming Mao, Chen Henry Wu, Ansong Ni, Yusen Zhang, Rui Zhang, Tao Yu, Budhaditya Deb, Chenguang Zhu, Ahmed H. Awadallah, and Dragomir Radev. 2021. DYLE: Dynamic latent extraction for abstractive long-input summarization. arXiv preprint arXiv:2110.08168.
Laura Mascarell, Rabin Chalumattu, and Julien Heitmann. 2023. Entropy-based sampling for abstractive multi-document summarization in low-resource settings. In Proceedings of the 16th International Natural Language Generation Conference (INLG 2023).
Jian-Ping Mei and Lihui Chen. 2012. SumCR: A new subtopic-based extractive approach for text summarization. Knowledge and Information Systems, 31:527-545.
Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411.
Paul Over and James Yen. 2004. An introduction to DUC-2004. National Institute of Standards and Technology.
Dragomir R. Radev, Hongyan Jing, Malgorzata Stys, and Daniel Tam. 2004. Centroid-based summarization of multiple documents. Information Processing & Management, 40(6):919-938.
Yun-Zhu Song, Yi-Syuan Chen, and Hong-Han Shuai. 2022. Improving multi-document summarization through referenced flexible extraction with credit-awareness. arXiv preprint arXiv:2205.01889.
Krysta Svore, Lucy Vanderwende, and Christopher Burges. 2007. Enhancing single-document summarization by combining RankNet and third-party sources. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 448-457.
Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning. arXiv preprint arXiv:2008.00401.
Petar Veličković, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903.
Xiaojun Wan. 2011. Using bilingual information for cross-language document summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 1546-1555.
Xiaojun Wan, Ziqiang Cao, Furu Wei, Sujian Li, and Ming Zhou. 2015. Multi-document summarization via discriminative summary reranking. arXiv preprint arXiv:1507.02062.
Xiaojun Wan, Huiying Li, and Jianguo Xiao. 2010. Cross-language document summarization based on machine translation quality prediction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 917-926.
Xiaojun Wan, Fuli Luo, Xue Sun, Songfang Huang, and Jin-ge Yao. 2019. Cross-language document summarization via extraction and ranking of multiple summaries. Knowledge and Information Systems, 58:481-499.
Danqing Wang, Pengfei Liu, Yining Zheng, Xipeng Qiu, and Xuanjing Huang. 2020. Heterogeneous graph neural networks for extractive document summarization. arXiv preprint arXiv:2004.12393.
Dingding Wang, Shenghuo Zhu, Tao Li, and Yihong Gong. 2009. Multi-document summarization using sentence-based topic models. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 297-300.
Wen Xiao, Iz Beltagy, Giuseppe Carenini, and Arman Cohan. 2022. PRIMERA: Pyramid-based masked sentence pre-training for multi-document summarization. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, pages 5245-5263. Association for Computational Linguistics.
L. Xue. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.
Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. arXiv preprint arXiv:1706.06681.
Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter J. Liu. 2019. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. Preprint, arXiv:1912.08777.
Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, and Chengqing Zong. 2019. NCLS: Neural cross-lingual summarization. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3054-3064, Hong Kong, China. Association for Computational Linguistics.
Markus Zopf. 2018. Auto-hMDS: Automatic construction of a large heterogeneous multilingual multi-document summarization corpus. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).

2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/images.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:804fc82b7685a3a0411804f9f9bfc3896742d4dcc5fbf664f511b6f9e9ce02e4
size 533401

2025/A Mixed-Language Multi-Document News Summarization Dataset and a Graphs-Based Extract-Generate Model/layout.json ADDED
The diff for this file is too large to render. See raw diff

2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/73b2862f-4128-4c3f-916f-50e750cc071e_content_list.json ADDED
The diff for this file is too large to render. See raw diff

2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/73b2862f-4128-4c3f-916f-50e750cc071e_model.json ADDED
The diff for this file is too large to render. See raw diff

2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/73b2862f-4128-4c3f-916f-50e750cc071e_origin.pdf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:97ec56d1e61b872c454d25548f65d6a3f7cd0a31e53b102ee5aa171d4173c5e8
size 2327711

2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/full.md ADDED
@@ -0,0 +1,442 @@
| 1 |
+
# A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation

Zixuan Yi
University of Glasgow
z.yi.1@research.gla.ac.uk

Iadh Ounis
University of Glasgow
iadh.ounis@glasgow.ac.uk

# Abstract

Chain-of-Thought (CoT) prompting has been shown to be effective in guiding Large Language Models (LLMs) to decompose complex tasks into multiple intermediate steps and to construct a rational reasoning chain for inferring answers. However, the linear nature of CoT falls short of enabling LLMs to effectively handle graph structures, which are essential for personalised recommendation tasks that rely on user-item interaction graphs. To bridge this gap, we propose GollaRec, which leverages a new Graph-of-Thought (GoT) prompting technique in a Multi-modal LLM, namely LLaVA, to effectively exploit the complex structure of the interaction graphs. GollaRec enhances the recommendation effectiveness by integrating both visual and textual "thoughts" into a graph-structured prompt, using both item images and descriptions to produce richer multi-modal user/item representations. In our proposed approach, GollaRec leverages text-graph alignment and graph instruction tuning to allow the Multi-modal LLM to capture complex graph structures. In addition, GollaRec leverages a graph adapter to integrate user-item interactions into the resulting user/item embeddings, therefore effectively adapting the model to the recommendation task. Our extensive experiments on 6 benchmark datasets demonstrate the superiority of our proposed GollaRec model over 12 existing state-of-the-art models in various multi-modal recommendation tasks, including general and multi-domain recommendation tasks.

# 1 Introduction

Large Language Models (LLMs) have demonstrated a remarkable capability in language understanding and text generation in various real-world scenarios (Touvron et al., 2023; Jiang et al., 2024). However, their training on unstructured data often limits their capability in handling tasks that necessitate complex, multi-step reasoning or a precise contextual understanding (Lei et al., 2023). This limitation becomes particularly important with graph-structured data, which is essential in fields such as social network analysis, drug discovery, and notably recommender systems (Guo et al., 2023). Indeed, graph data, with its complex relational structures between node entities, poses a unique challenge for LLMs, which typically do not encounter structured data formats like column-indexed records during their pre-training, leading to difficulties in handling the domain-specific knowledge inherent to such data (Yu et al., 2023).

Recent advances in prompting techniques have enhanced the LLMs' capability to address complex reasoning tasks (Jin et al., 2022). For example, Brown et al. (2020) employed few-shot in-context learning to enhance the reasoning capabilities of an LLM by using input and output examples as prompts. Wei et al. (2022) and Wang et al. (2022b) enhanced the LLMs' effectiveness by using Chain-of-Thought (CoT) prompting, which involves a series of demonstrations where each step of a detailed, step-by-step explanation serves as an instructive example to guide the reasoning process. Despite its effectiveness in linear textual reasoning, CoT does not inherently extend to tasks involving structural graph data, which necessitate mining complex relational structures (Jiang et al., 2023). This identified limitation, henceforth denoted as insufficient graph mining, emphasises the need for enhancing LLMs to effectively tackle graph-related tasks. Graph data, especially the user-item interaction graphs in recommender systems, encapsulates unique patterns that contain domain-specific knowledge. To bridge this gap, we integrate graph data into the prompt by linearising the structured data into textual sentences, thereby addressing the insufficient graph mining problem in CoT.

In addition to the problem of insufficient graph mining in CoT, the language-based reasoning process is often overly complex and abstract (Huang and Chang, 2023). Instead, the use of the item images can be an intuitive medium in recommendation tasks. Indeed, integrating an image in a prompt not only enriches the modelling of the user profiles but also enables the used model to generate more coherent outputs. In this paper, we use a Multi-modal Large Language Model (MLLM), namely LLaVA (Liu et al., 2024), to ensure that the user/item embeddings are semantically coherent. To prompt effective reasoning in the presence of multi-modality and a graph structure, we propose a multi-modal Graph-of-Thought (GoT) prompting technique, as illustrated in Figure 1, which can be used in two distinct recommendation tasks: a general recommendation task and a multi-domain recommendation task. To enable an effective understanding of the graph knowledge in the used MLLM, we first perform a text-graph alignment to align the item textual embeddings and the corresponding item node embeddings in the semantic space, and perform a graph instruction tuning on the MLLM so as to match each graph token with its textual description. On the other hand, recent recommendation approaches employing LLMs for various recommendation tasks encounter a key challenge, namely the constraint of input token length in these LLMs (Ren et al., 2024; Wei et al., 2024). This limitation also restricts the amount of graph information that can be integrated within a GoT prompt. Hence, in order to adequately integrate sufficient graph information within a GoT, we pre-train a recommender (He et al., 2020), specifically designed to select the maximum number of high-potential items within the input token limit.

Chain-of-Thought:

prompt = "You are a product recommendation system and required to recommend user with product based on user historical items <historical items> that each product with title (similar title), category (similar category) and description (similar description). Please output the user preference by ranking the index of the user's candidate items <candidate items>, but not user historical interacted items."

Prompt with CoT:

cot_steps = [
"Let's think about the recommended item list in a step-by-step manner.",
"Consider the key attributes (titles, categories, descriptions) of each candidate item from <historical items>.",
"Then, consider the similarity of each candidate item to the historical attributes.",
"Then, consider to rank the candidate items based on their similarity to the historical preferences and list each item by index.",
"The final ranking list is derived from the candidate items <candidate items>." ]

Combine the prompt and the CoT steps:

prompt_with_cot = "\n".join([prompt] + cot_steps)

Graph-of-Thought:

prompt_text = "You are a product recommendation system and required to recommend user with product based on user historical items <historical items> that each product with title (similar title), category (similar category) and description (similar description). Please output the user preference by ranking the index of the user's candidate items <candidate items>, but not user historical interacted items."

prompt_image = last purchased item image <image>

Prompt with GoT:

got_steps = [
"Let's think about the recommended item list in a step-by-step manner.",
"Consider the key attributes (titles, categories, descriptions) of each candidate item from <historical items> and a user-item graph <graph>.",
"Then, consider to review an initial ranking list <initial ranking items> generated using a graph recommender.",
"Then, consider the semantic similarity between the last purchased item and the items in the candidate set <candidate items> in both the visual and textual modalities.",
"The final ranking list is derived from the candidate items <candidate items>." ]

Combine the prompt and the GoT steps:

prompt_with_got = "\n".join([prompt_text + prompt_image] + got_steps)

Figure 1: Chain-of-thought prompting and our proposed graph-of-thought prompting in the recommendation task.
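
For concreteness, the GoT prompt on the right of Figure 1 can be assembled programmatically. The sketch below is an illustration rather than the authors' released code: `build_got_prompt` and its argument names are our assumptions, and the placeholder slots stand in for the serialised historical items, graph, initial ranking and candidate set.

```python
# Minimal sketch of assembling the GoT prompt shown in Figure 1.
# All identifiers here are illustrative assumptions, not from the paper.

GOT_STEPS = [
    "Let's think about the recommended item list in a step-by-step manner.",
    "Consider the key attributes (titles, categories, descriptions) of each "
    "candidate item from {historical} and a user-item graph {graph}.",
    "Then, consider to review an initial ranking list {initial} generated "
    "using a graph recommender.",
    "Then, consider the semantic similarity between the last purchased item "
    "and the items in the candidate set {candidates} in both the visual and "
    "textual modalities.",
    "The final ranking list is derived from the candidate items {candidates}.",
]

def build_got_prompt(task_text: str, historical: str, graph: str,
                     initial: str, candidates: str) -> str:
    """Join the task description, the <image> slot and the GoT steps."""
    steps = [s.format(historical=historical, graph=graph,
                      initial=initial, candidates=candidates)
             for s in GOT_STEPS]
    # "<image>" marks the slot that LLaVA fills with its visual tokens.
    return "\n".join([task_text + " <image>"] + steps)
```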

Our contributions can be summarised as follows: (1) We propose Graph-of-Thought LLaVA for Recommendation (GollaRec), a new recommendation model integrating the user-item graph information within Graph-of-Thought (GoT). To the best of our knowledge, this is the first work to apply GoT prompting specifically tailored to enhance multi-modal recommendation tasks, so as to effectively address the problem of insufficient graph mining in CoT; (2) To effectively integrate the user-item graph information within GoT, we use a text-graph alignment method and graph instruction tuning to effectively capture the graph patterns; (3) To address the limited token length problem in the MLLM, we use a pre-trained recommender to feed a maximum number of high-potential items into GoT; (4) Our extensive experiments on six benchmark datasets show that GollaRec significantly outperforms 12 strong baselines across two recommendation tasks.

# 2 GollaRec

We first describe in Section 2.1 the two top-$k$ multimodal recommendation tasks we tackle in this paper, namely a general recommendation task and a multi-domain recommendation task. Section 2.2 presents our proposed GoT technique, which includes three key parts (adaptive graph truncation, text-graph alignment and text-image alignment). Next, we introduce the architecture of GollaRec in Section 2.3. The model is illustrated in Figure 2.

# 2.1 Multi-modal Recommendation Tasks

In this paper, we address two specific recommendation tasks that leverage the capabilities of a Multi-modal Large Language Model (MLLM) alongside a graph adapter to process and integrate diverse data types. Each task includes user and item sets, denoted as $\mathcal{U} = \{u\}$ and $\mathcal{I} = \{i\}$ respectively, with embeddings $\mathbf{X} \in \mathbb{R}^{d \times (|\mathcal{U}| + |\mathcal{I}|)}$, where $d$ is the dimensionality of these embeddings. Then, we denote the items' multi-modal embeddings as $\mathbf{X}_{i,m} \in \mathbb{R}^{d_m \times |\mathcal{I}|}$, where $d_m$ is the dimension of that modality's embedding and $m \in \mathcal{M}$ is a modality from the set $\mathcal{M} = \{v, t\}$, with $v$ and $t$ representing the visual and textual modalities<sup>1</sup>, respectively.

Figure 2: The architecture of our GollaRec model.

The users' historical behaviour data is denoted by $\mathcal{R} \in \{0,1\}^{|\mathcal{U}| \times |\mathcal{I}|}$, where each entry $\mathcal{R}_{u,i} = 1$ if the user $u$ clicked item $i$, and $\mathcal{R}_{u,i} = 0$ otherwise. Using the historical interaction data, we construct an interaction graph $\mathcal{G} = \{\mathcal{V}, \mathcal{E}\}$ with $\mathcal{V} = \{\mathcal{U} \cup \mathcal{I}\}$ and $\mathcal{E} = \{(u,i) \mid u \in \mathcal{U}, i \in \mathcal{I}, \mathcal{R}_{u,i} = 1\}$.
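
As a minimal sketch of this notation, the bipartite graph $\mathcal{G}$ can be derived from the binary interaction matrix $\mathcal{R}$ as follows; the function and variable names are ours, not the paper's:

```python
import numpy as np

def build_interaction_graph(R: np.ndarray):
    """Build G = (V, E) from a binary |U| x |I| interaction matrix R.

    Users are indexed 0..|U|-1 and items |U|..|U|+|I|-1, so both sets
    share a single node index space, as in a user-item bipartite graph.
    """
    num_users, num_items = R.shape
    users, items = np.nonzero(R)                 # all (u, i) with R[u, i] = 1
    edges = [(u, num_users + i) for u, i in zip(users, items)]
    nodes = list(range(num_users + num_items))
    return nodes, edges
```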

Task 1 - General Recommendation: For the general recommendation task, given an interaction graph $\mathcal{G}$, the item descriptions and the item images, we aim to estimate the user preferences through an MLLM $f_{\theta_1}$ and an adapter $f_{\theta_2}$ that recommend the top-$k$ items for a target user $u$.

Task 2 - Multi-domain Recommendation: The multi-domain recommendation task extends the application of the MLLM $f_{\theta_1}$ and the adapter $f_{\theta_2}$ across multiple source domains $\{D_1, \ldots, D_n\}$ to a new target domain $D_{n+1}$. The goal of this task is to rank the top-$k$ items for a target user $u$ in the new target domain $D_{n+1}$. Note that, in this task, we do not require overlapping users between the domains.

# 2.2 Graph-of-Thought

As discussed in Section 1, LLMs often struggle with unfamiliar patterns and structures in graph data, thereby impeding these LLMs from generating accurate and coherent responses in graph-related tasks. To tackle this problem of insufficient graph mining, it is important to enhance the LLM's ability to interpret the interactions within the user-item graph. This necessitates a step-by-step demonstration of reasoning within the task. In this paper, we propose a Graph-of-Thought (GoT) prompting technique, specifically designed to provide a structured rationale that delineates the reasoning steps needed for the recommendation tasks. Specifically, this GoT technique prompts the MLLM to reason about the potential candidates on the user-item interaction graph, taking into account the semantic similarity derived from both the visual and textual item embeddings in order to determine the final ranking list for the target user. Figure 1 (right) shows our proposed GoT technique in action when addressing the two multi-modal recommendation tasks. However, effectively incorporating graph and multi-modal data for an MLLM empowered with a GoT technique presents several challenges:

- C1. How to integrate a large volume of textual nodes within a fixed input token length?
- C2. How to enable the MLLM to effectively model the relationship between the item images and their descriptions?
- C3. How to facilitate the MLLM's understanding of graph patterns in the user-item graph?

In the following, we propose solutions to address these challenges, namely adaptive graph truncation, text-image alignment and text-graph alignment.

Adaptive Graph Truncation. In our initial experiments, we found that the used MLLM, namely LLaVA (Vicuna-7B)$^2$, allocates 576 visual tokens within its overall input token length limit (2048). As outlined in challenge C1, these visual tokens consume a large share of the total input length, markedly reducing the remaining available token budget and thereby impeding the model's capability to encode richer user-item graph information. To address this limitation and ensure an effective user/item modelling, we leverage a pre-trained recommender to produce an initial candidate ranking list of items for the target user. We aim to retain the most promising candidate items in both recommendation tasks. In particular, we append the descriptions of these high-potential candidate items to the GoT prompt, and adaptively truncate this list so that it fits within the restricted token limit. Algorithm 1 presents our method for addressing the limited input token length of LLaVA. Figure 3 illustrates our adaptive truncation method applied within our GoT prompt.

Algorithm 1 Adaptive Graph Truncation in GoT

1: Input: User ID, Item IDs and Descriptions, Max_Tokens = 2048
2: Output: Truncated item list initial_list
3: Initialise a pre-trained recommender (e.g., LightGCN)
4: # Generate the initial ranking
5: items_list ← Recommender.RankItems(User ID, Item IDs)
6: # Reserve tokens for the visual data
7: total_tokens ← 576
8: Initialise initial_list as empty
9: for each item in items_list do
10:   description ← GetDescription(item)
11:   tokens ← Tokenise(description)
12:   if total_tokens + length(tokens) ≤ Max_Tokens then
13:     append description to initial_list
14:     total_tokens ← total_tokens + length(tokens)
15:   else
16:     break
17:   end if
18: end for
19: return initial_list
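
A runnable sketch of Algorithm 1 is shown below, assuming hypothetical callables `rank_items`, `get_description` and `tokenise` in place of the pre-trained recommender, the item metadata lookup and the LLaVA tokeniser:

```python
MAX_TOKENS = 2048      # LLaVA (Vicuna-7B) input limit, per Algorithm 1
VISUAL_TOKENS = 576    # tokens reserved for the image

def truncate_candidates(user_id, item_ids, rank_items, get_description,
                        tokenise):
    """Greedy truncation: keep top-ranked item descriptions while the
    running token count (visual tokens included) stays within budget."""
    total_tokens = VISUAL_TOKENS
    kept = []
    for item in rank_items(user_id, item_ids):   # best-first order
        desc = get_description(item)
        tokens = tokenise(desc)
        if total_tokens + len(tokens) > MAX_TOKENS:
            break                                # token budget exhausted
        kept.append(desc)
        total_tokens += len(tokens)
    return kept
```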
Text-image Alignment. To address challenge C2 and enhance the MLLM's understanding of the relationships between multiple modalities, we use a contrastive pre-training method to pre-train LLaVA using all the available item image-text pairs. Yi et al. (2024a) also showed that fine-tuning MLLMs with image-text pairs significantly enhances the recommendation performance. Therefore, we use the item images and their descriptions as inputs to pre-train LLaVA with an Image-Text Contrastive (ITC) loss. This pre-training method aims to maximise the similarity between the items' image and description pairs while minimising the similarity between the mismatched pairs, thereby facilitating a unified joint embedding space for multi-modal inputs within GoT. The ITC loss is defined as follows:

$$
\mathcal{L}_{\mathrm{ITC}} = - \frac{1}{B} \sum_{p=1}^{B} \log \frac{\exp\left(\mathrm{sim}\left(v_{p}, t_{p}\right) / \tau\right)}{\sum_{q \neq p} \exp\left(\mathrm{sim}\left(v_{p}, t_{q}\right) / \tau\right)} \tag{1}
$$

where $B$ is the batch size, $v_{p}$ and $t_{p}$ are the visual and textual embeddings of the $p$-th item, $q$ represents a negative item index, $\mathrm{sim}$ is a similarity function using cross-entropy (Zhang and Sabuncu, 2018), and $\tau$ is a temperature parameter. As such, LLaVA learns the relationships between the item descriptions and their corresponding images, hence enriching GoT with contextualised information.
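
Equation (1) is an InfoNCE-style objective. A minimal PyTorch sketch is given below as an illustration (not the authors' code); note that, as is common in practice, it keeps the positive pair in the denominator, whereas Equation (1) sums over $q \neq p$:

```python
import torch
import torch.nn.functional as F

def itc_loss(v: torch.Tensor, t: torch.Tensor, tau: float = 0.07):
    """Image-Text Contrastive loss over a batch of B aligned pairs.

    v, t: (B, d) visual and textual embeddings of the same B items;
    row p of v should match row p of t and mismatch every other row.
    """
    v = F.normalize(v, dim=-1)
    t = F.normalize(t, dim=-1)
    logits = v @ t.T / tau                  # (B, B) pairwise similarities
    targets = torch.arange(v.size(0), device=v.device)
    # Cross-entropy pulls the diagonal (matched pairs) up and pushes
    # the off-diagonal (mismatched pairs) down, mirroring Equation (1).
    return F.cross_entropy(logits, targets)
```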

Text-graph Alignment. To address challenge C3 - enhancing the understanding of graph structural information by the MLLM - we focus on aligning the encoding of the graph structures with the natural language space. This alignment enables the used MLLM to effectively capture the structural patterns using its language understanding capabilities. Inspired by prior works about aligning text and graph data for the node classification task (Wen and Fang, 2023; Tang et al., 2024), we employ a text-graph grounding method and a graph instruction tuning method to maintain the graph's structural context within the used MLLM in the recommendation scenarios.
(1) Text-graph grounding: Following (Wen and Fang, 2023), we use a text encoder (namely BERT (Devlin et al., 2019)) and a graph encoder (namely a graph transformer (Yun et al., 2019)) to align their resulting item node embeddings $z_{1}$ and the item textual embeddings $z_{2}$ . We input item descriptions into both encoders to generate these embeddings, aligning them within a unified semantic space. Similar to Equation (1), we use a text-node contrastive loss by (Wen and Fang, 2023) to differentiate between the node-text matching pair $(z_{1p}, z_{2p})$ as a positive pair and the non-matching pair $(z_{1p}, z_{2q})$ as a negative pair. As a result, we use this contrastive loss to refine this alignment, thereby preparing the well-trained graph encoder for the following instruction tuning.
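
Under the same assumptions as the ITC sketch above, the text-node contrastive loss used for grounding can be sketched as follows, with `z1` and `z2` standing for the graph-encoder and text-encoder outputs:

```python
import torch
import torch.nn.functional as F

def text_node_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                               tau: float = 0.07) -> torch.Tensor:
    """Text-node contrastive loss over matching pairs (z1_p, z2_p).

    z1: (n, d) item node embeddings from the graph encoder;
    z2: (n, d) item text embeddings from the text encoder (e.g., BERT).
    Matching indices form positive pairs; all other rows act as negatives.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.T / tau
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)
```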

(2) Graph instruction tuning: Following the text-graph grounding phase, we use the pre-trained graph encoder to project the node embeddings into graph tokens using a Multi-Layer Perceptron (MLP): $\hat{z}_1 = \mathrm{MLP}(z_1)$, where $z_1$ are the node embeddings derived from the graph encoder and $\hat{z}_1$ represents the resulting graph tokens. These graph tokens, which represent graph structures, allow the MLLM to process and interpret the graph-structured data, thereby enabling the MLLM's understanding of graph patterns in the interaction graph. Inspired by GraphGPT's (Tang et al., 2024) methodology, we adapt a graph matching task to the context of recommender systems during this instruction tuning phase. Specifically, we construct the instruction by selecting a central item node and its $l$ neighbouring nodes, presenting these nodes as a sequence of graph tokens (<graph_start>, <graph_token>$_1$, <graph_token>$_2$, ..., <graph_token>$_l$, <graph_end>). The goal of this matching task is to differentiate and match the graph tokens with the corresponding language tokens using an MLLM. We input the projected graph tokens $\hat{z}_1$ and the instruction's textual embeddings $z_3$, for a given sequence of length $l$. We then compute the probability of generating the target output $x_o$ as follows:

$$
\psi\left(x_{o} \mid \hat{z}_{1}, z_{3}\right) = \prod_{j=1}^{l} \psi_{\theta_{2}}\left(x_{j} \mid \hat{z}_{1}, z_{3}\right) \tag{2}
$$

where $\theta_{2}$ are the learnable parameters within GollaRec, and $\psi_{\theta_{2}}$ is the probability of the $j$-th token $x_{j}$. Moreover, we optimise the MLLM's performance in matching the shuffled list of language tokens to the ordered sequence of graph tokens using a cross-entropy loss. As such, we enhance the understanding of graph structural information by the MLLM, thereby addressing the insufficient graph mining problem.

Figure 3: Our proposed GoT for recommendation.

# 2.3 Model Architecture

Figure 2 provides a detailed overview of the GollaRec model architecture, including the GoT technique used to prompt the LLaVA model as input, and an adapter, namely LightGCN (He et al., 2020), to propagate the resulting embeddings from the MLLM's last layer and output the final embeddings for ranking. We obtain the final user embeddings using the adapter as follows: $h_u = \sum_{i \in \mathcal{N}_u} \frac{h_i}{\sqrt{|\mathcal{N}_i| |\mathcal{N}_u|}}$, where $\mathcal{N}_u$ and $\mathcal{N}_i$ denote the sets of neighbours of user $u$ and item $i$, respectively, while $|\mathcal{N}_u|$ and $|\mathcal{N}_i|$ represent the sizes of $\mathcal{N}_u$ and $\mathcal{N}_i$. Analogously, we also obtain the item embeddings. Our GollaRec model leverages two types of input, with the textual input encompassing the item descriptions and the visual input including the item images. During the training stage, the text-graph and text-image alignment methods enable LLaVA to capture graph structures and the relationships between the item images and descriptions. For the inference stage, we append the resulting GoT to our GollaRec model's input so as to generate the corresponding user/item embeddings, as shown in Figure 2. Then, we use a graph adapter to integrate the user-item interactions, thereby refining the final user/item embeddings for the top-$k$ multimodal recommendation tasks, so as to address the problem of insufficient graph mining in the current prompting techniques.
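
The adapter's propagation rule can be written directly from the formula above. A minimal dense-tensor sketch (a practical implementation would use a sparse adjacency matrix) is:

```python
import torch

def propagate(h_items: torch.Tensor, R: torch.Tensor) -> torch.Tensor:
    """One LightGCN-style propagation step producing user embeddings.

    h_items: (num_items, d) item embeddings from the MLLM's last layer.
    R: (num_users, num_items) binary interaction matrix (float tensor).
    Implements h_u = sum_{i in N_u} h_i / sqrt(|N_i| * |N_u|).
    """
    deg_u = R.sum(dim=1, keepdim=True).clamp(min=1)   # |N_u| per user
    deg_i = R.sum(dim=0, keepdim=True).clamp(min=1)   # |N_i| per item
    norm = R / torch.sqrt(deg_u * deg_i)              # symmetric normalisation
    return norm @ h_items                             # (num_users, d)
```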

# 3 Experiments

# 3.1 Experimental Setup

Datasets. In order to evaluate the effectiveness of our GollaRec model across the multi-modal general and multi-domain recommendation tasks, we conduct experiments on six commonly used datasets – three focused on general recommendation and three on multi-domain recommendation. For general recommendation, we use two Amazon Review datasets (He and McAuley, 2016a) – the Clothing and Baby datasets – and the HM fashion recommendation dataset (Xian et al., 2023). For multi-domain recommendation, we conduct experiments across seven domains derived from the Amazon Review datasets. We use the Food, Home, Clothing, and Office datasets as the source domain datasets and evaluate GollaRec and the used baselines on three target datasets, namely Pantry, Electronics and Sports. We choose these datasets for their extensive user-item interactions and rich multi-modal data, which include images and detailed textual descriptions such as titles, categories, and brands (Yi and Ounis, 2025; Yi et al., 2023a). Table 5 in Appendix A presents the statistics of the used datasets.
Evaluation. Following the evaluation setting in (Zhang et al., 2021a; Yi et al., 2023b), we randomly split each dataset into training, validation, and testing sets using an 8:1:1 ratio. We optimise the hyper-parameters of both our GollaRec model and the baseline models using a grid search on the validation set. We use two commonly used evaluation metrics, namely Recall@k and NDCG@k, to examine the top-$k$ recommendation performance for both the general and multi-domain recommendation tasks. We set $k$ to 20 (Zhang et al., 2021a; Yi et al., 2024b), and report the average performance achieved over all users in the test set. All the used baselines and our GollaRec model are implemented with PyTorch and were run on two A6000 GPUs with 96GB of memory. We report the detailed hyper-parameter settings unique to GollaRec across all six datasets, including the batch size, learning rate, epochs, maximum token length, warmup ratio and weight decay, in Appendix B. Our source code and model checkpoints are publicly available at: https://github.com/zxy-m184/GollaRec.
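
For reference, the two metrics can be computed per user as below; this is a standard formulation rather than code from the paper:

```python
import math

def recall_at_k(ranked: list, relevant: set, k: int = 20) -> float:
    """Fraction of a user's relevant items retrieved in the top k."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant) if relevant else 0.0

def ndcg_at_k(ranked: list, relevant: set, k: int = 20) -> float:
    """Discounted cumulative gain in the top k, normalised by the ideal DCG."""
    dcg = sum(1.0 / math.log2(pos + 2)
              for pos, item in enumerate(ranked[:k]) if item in relevant)
    ideal = sum(1.0 / math.log2(pos + 2)
                for pos in range(min(len(relevant), k)))
    return dcg / ideal if ideal > 0 else 0.0
```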

Baselines. To examine the effectiveness of our GollaRec model in the multi-modal general recommendation task, we compare GollaRec against 10 existing state-of-the-art models, categorised into four groups: (1) General recommenders: LightGCN (He et al., 2020); (2) Multi-modal recommenders: VBPR (He and McAuley, 2016b), MMGCL (Yi et al., 2022), BM3 (Zhou et al., 2023); (3) MLLM methods: CLIP (Radford et al., 2021), BEiT-3 (Wang et al., 2022a), LLaVA (Liu et al., 2024); (4) Language-based recommenders: P5 (Geng et al., 2022), LMRecSys (Zhang et al., 2021b) and TALLRec (Bao et al., 2023). For the multi-modal multi-domain recommendation task, apart from using the same multi-modal methods above, we also compare GollaRec with three multi-domain recommenders: MOME (Ma et al., 2018), PLE (Tang et al., 2020), MGFN (Zhang et al., 2022a). We describe the used baselines in Appendix A, with a summary provided in Table 6.
<table><tr><td>Dataset</td><td colspan="2">HM</td><td colspan="2">Clothing</td><td colspan="2">Baby</td></tr><tr><td>Methods</td><td>Recall@20</td><td>NDCG@20</td><td>Recall@20</td><td>NDCG@20</td><td>Recall@20</td><td>NDCG@20</td></tr><tr><td>LightGCN</td><td>0.1254*</td><td>0.0743*</td><td>0.0553*</td><td>0.0246*</td><td>0.0714*</td><td>0.0319*</td></tr><tr><td>VBPR</td><td>0.1108*</td><td>0.0717*</td><td>0.0611*</td><td>0.0277*</td><td>0.0740*</td><td>0.0329*</td></tr><tr><td>MMGCL</td><td>0.1633*</td><td>0.0964*</td><td>0.0607*</td><td>0.0277*</td><td>0.0790*</td><td>0.0352*</td></tr><tr><td>BM3</td><td>0.1711*</td><td>0.0981*</td><td>0.0797*</td><td>0.0358*</td><td>0.0863*</td><td>0.0380*</td></tr><tr><td>CLIP</td><td>0.0956*</td><td>0.0687*</td><td>0.0631*</td><td>0.0281*</td><td>0.0664*</td><td>0.0304*</td></tr><tr><td>BEiT-3</td><td>0.0874*</td><td>0.0661*</td><td>0.0617*</td><td>0.0265*</td><td>0.0688*</td><td>0.0311*</td></tr><tr><td>LLaVA</td><td>0.1346*</td><td>0.0910*</td><td>0.0702*</td><td>0.0315*</td><td>0.0674*</td><td>0.0316*</td></tr><tr><td>P5</td><td>0.1417*</td><td>0.0872*</td><td>0.0766*</td><td>0.0360*</td><td>0.0825*</td><td>0.0356*</td></tr><tr><td>LMRecSys</td><td>0.1269*</td><td>0.0801*</td><td>0.0623*</td><td>0.0322*</td><td>0.0778*</td><td>0.0322*</td></tr><tr><td>TALLRec</td><td>0.1145*</td><td>0.0782*</td><td>0.0632*</td><td>0.0335*</td><td>0.0752*</td><td>0.0313*</td></tr><tr><td>GollaRec-CoT</td><td>0.1807*</td><td>0.1039</td><td>0.0911*</td><td>0.0404*</td><td>0.0939*</td><td>0.0410</td></tr><tr><td>GollaRec</td><td>0.1880</td><td>0.1064</td><td>0.0932</td><td>0.0423</td><td>0.0958</td><td>0.0425</td></tr></table>
Table 1: Comparison of GollaRec with the used general recommendation baselines. * denotes a significant difference with a baseline using the Holm-Bonferroni corrected paired t-test with $p < 0.05$ .
<table><tr><td>Dataset</td><td colspan="2">Pantry</td><td colspan="2">Electronics</td><td colspan="2">Sports</td></tr><tr><td>Methods</td><td>Recall@20</td><td>NDCG@20</td><td>Recall@20</td><td>NDCG@20</td><td>Recall@20</td><td>NDCG@20</td></tr><tr><td>VBPR</td><td>0.0723*</td><td>0.0326*</td><td>0.0442*</td><td>0.0196*</td><td>0.0771*</td><td>0.0349*</td></tr><tr><td>MMGCL</td><td>0.0907*</td><td>0.0377*</td><td>0.0627*</td><td>0.0304*</td><td>0.0913*</td><td>0.0428*</td></tr><tr><td>BM3</td><td>0.0932*</td><td>0.0417*</td><td>0.0638*</td><td>0.0310*</td><td>0.0970*</td><td>0.0438*</td></tr><tr><td>CLIP</td><td>0.0683*</td><td>0.0318*</td><td>0.0461*</td><td>0.0235*</td><td>0.0727*</td><td>0.0310*</td></tr><tr><td>BEiT-3</td><td>0.0596*</td><td>0.0289*</td><td>0.0481*</td><td>0.0240*</td><td>0.0748*</td><td>0.0341*</td></tr><tr><td>LLaVA</td><td>0.0659*</td><td>0.0313*</td><td>0.0604*</td><td>0.0288*</td><td>0.0709*</td><td>0.0303*</td></tr><tr><td>MOME</td><td>0.0797*</td><td>0.0352*</td><td>0.0573*</td><td>0.0261*</td><td>0.0749*</td><td>0.0318*</td></tr><tr><td>PLE</td><td>0.0862*</td><td>0.0384*</td><td>0.0595*</td><td>0.0278*</td><td>0.0866*</td><td>0.0367*</td></tr><tr><td>MGFN</td><td>0.0891*</td><td>0.0413*</td><td>0.0623*</td><td>0.0305*</td><td>0.0894*</td><td>0.0383*</td></tr><tr><td>GollaRec-CoT</td><td>0.1183</td><td>0.0469*</td><td>0.0655*</td><td>0.0323*</td><td>0.1046*</td><td>0.0456</td></tr><tr><td>GollaRec</td><td>0.1213</td><td>0.0495</td><td>0.0681</td><td>0.0350</td><td>0.1112</td><td>0.0502</td></tr></table>
Table 2: Comparison of GollaRec with the used multi-domain baselines. * denotes a significant difference with a baseline using the Holm-Bonferroni corrected paired t-test with $p < 0.05$.
# 3.2 Results & Analysis
We provide our main results in this section. We also provide additional experimental results using various LLaVA variants and a qualitative case study that illustrates how GollaRec makes specific decisions in Appendix C. The same appendix also investigates GollaRec's effectiveness in a cold-start scenario, a hyper-parameter study, and an assessment of GollaRec's time efficiency.
# (RQ1): How does our proposed GollaRec model perform compared with existing recommendation models?

We compare GollaRec with all the used baselines. Table 1 and Table 2 present the comparison results for the general and multi-domain recommendation tasks, where "GollaRec-CoT" refers to a GollaRec variant that uses CoT instead of GoT. To assess the significance of the reported performance differences between GollaRec and the baselines, we use a paired t-test with the Holm-Bonferroni correction $(p < 0.05)$.
General recommendation task: Table 1 shows the performance of our GollaRec model and the general recommendation baselines. The results highlight several key findings: (1) Our GollaRec model consistently achieves the best performance on all the used datasets. Compared with the best baseline model (namely BM3), GollaRec achieves an average improvement of $12.7\%$ across all the datasets. These results demonstrate the effectiveness of GollaRec in the general recommendation task; (2) GollaRec outperforms the MLLM models (CLIP, BEiT-3, LLaVA) by a large margin on all the used datasets. The suboptimal performance of these MLLM models suggests their failure to effectively prompt the MLLM for the recommendation task, as well as the absence of informative interactions in these models. Overall, our results indicate that GollaRec, which prompts an MLLM with interactions, is more effective than relying solely on visual and textual similarities for recommendation; (3) When comparing GollaRec with the language-based models (P5, LMRecSys, TALLRec) and the multi-modal models (VBPR, MMGCL, BM3) in Table 1, we observe that our GollaRec model significantly outperforms both groups of models on the three used datasets. This indicates that our GollaRec model successfully adapts an MLLM to the recommendation task, effectively leveraging textual and visual prompts with GoT to recommend more accurate items; (4) From Table 1, we observe that our GollaRec model significantly outperforms the GollaRec-CoT variant on all three datasets. This indicates that GoT, with its integration of the user-item interaction graph as additional context, is more effective in handling the general recommendation task compared to CoT; (5) When comparing our GollaRec model with LightGCN, it is notable that our GollaRec model initialises the node embeddings with MLLM-derived embeddings, whereas LightGCN uses randomly-initialised embeddings. Our GollaRec model significantly outperforms LightGCN by a large margin across all the used datasets, indicating the effectiveness of leveraging pre-trained embeddings over training from scratch.
<table><tr><td>Dataset</td><td colspan="2">General Rec (Clothing)</td><td colspan="2">Multi-domain Rec (Sports)</td></tr><tr><td>Variants</td><td>Recall@20</td><td>NDCG@20</td><td>Recall@20</td><td>NDCG@20</td></tr><tr><td>w/o GoT</td><td>0.0885*</td><td>0.0402*</td><td>0.0982*</td><td>0.0442*</td></tr><tr><td>w/o Adapter</td><td>0.0821*</td><td>0.0358*</td><td>0.0848*</td><td>0.0363*</td></tr><tr><td>w/o Text-image Alignment</td><td>0.0868*</td><td>0.0389*</td><td>0.0941*</td><td>0.0436*</td></tr><tr><td>w/o Text-graph Alignment</td><td>0.0901*</td><td>0.0402*</td><td>0.1068</td><td>0.0468*</td></tr><tr><td>GollaRec</td><td>0.0932</td><td>0.0423</td><td>0.1112</td><td>0.0502</td></tr></table>
Table 3: Results for ablating the key components of GollaRec. * indicates a significant difference using a paired t-test with $p < 0.05$ .
Multi-domain recommendation: From the multi-domain recommendation results in Table 2, we observe the following findings: (1) GollaRec significantly outperforms all the baselines and GollaRec-CoT in most instances (except in 2 out of 60 instances) across the three used datasets. These results confirm GollaRec's capability in effectively leveraging both domain-specific and shared common knowledge within multi-domain recommendation settings. In addition, the results show that GoT is a more effective prompting technique than CoT, particularly in integrating the interaction graph information to generate effective user/item embeddings. (2) When comparing the multi-modal models (VBPR, MMGCL, BM3, GollaRec) with the multi-domain baselines (MOME, PLE, MGFN), we observe that the models from the former group, which incorporate multi-modal item contents, are consistently more effective. This result suggests that incorporating rich multi-modal data facilitates a more effective transfer of multi-modal semantic knowledge in recommendation scenarios. (3) When compared with the MLLM models (CLIP, BEiT-3, LLaVA), GollaRec demonstrates significant performance improvements on the three used multi-domain datasets. Such a superior performance emphasises the importance of leveraging the abundant interactions from the target domain in order to enhance the MLLM's understanding of the multi-domain recommendation task.
# (RQ2): How do the key components of GollaRec affect the performance of the model?

We conduct an ablation study to assess the impact of the different components of GollaRec, including the GoT, adapter, text-graph and text-image alignment methods. We illustrate the results using the Clothing dataset for the general recommendation task and the Sports dataset for the multi-domain recommendation task, since we observe similar trends and conclusions across all the other used datasets. First, to gauge the effectiveness of GoT in GollaRec, we conduct a comparative analysis by removing GoT and retaining the initial prompt for the task description. From Table 3, we observe that GollaRec significantly outperforms its "w/o GoT" variant in all instances on both datasets. This result confirms that prompting GollaRec with a graph structure results in an improved recommendation performance. Next, we ablate the adapter used in our GollaRec model, so as to examine its usefulness. We observe a marked decrease in GollaRec's performance when removing the graph adapter (c.f. the "w/o Adapter" variant in Table 3). This confirms the necessity of including personalised information such as the user-item interactions in order to capture domain-specific knowledge in GollaRec. Table 3 also shows that the "w/o Text-image Alignment" variant exhibits a reduced performance compared to GollaRec on both datasets. This highlights the importance of learning the relationships between the item textual descriptions and the images within the GoT prompt, in order to enhance the recommendation performance in both general and cross-domain recommendation tasks. In addition, we observe that GollaRec significantly outperforms its "w/o Text-graph Alignment" variant in 3 out of 4 instances. This result shows that the text-graph alignment method is overall promising in incorporating the graph information into GollaRec, thereby addressing challenge C3 (refer to Section 2.2) by enhancing the MLLM's ability in interpreting graph patterns within the user-item graph.

Figure 4: The t-SNE visualisation of the item embeddings on the Sports and Clothing datasets. A star refers to a visual embedding while a pentagon represents a text embedding. The average MSE value indicates the average distance between the visual and textual embeddings.
<table><tr><td>Dataset</td><td colspan="2">General Rec (Clothing)</td><td colspan="2">Multi-domain Rec (Sports)</td></tr><tr><td>Variants</td><td>Recall@20</td><td>NDCG@20</td><td>Recall@20</td><td>NDCG@20</td></tr><tr><td>- RandomDemonstrationPos</td><td>0.0920†</td><td>0.0424†</td><td>0.1027</td><td>0.0465†</td></tr><tr><td>- RandomImagePos</td><td>0.0941†</td><td>0.0426†</td><td>0.1061†</td><td>0.0508†</td></tr><tr><td>- RandomLenTruncation (80%)</td><td>0.0834</td><td>0.0366</td><td>0.0983</td><td>0.0440</td></tr><tr><td>- RandomLenTruncation (60%)</td><td>0.0807</td><td>0.0334</td><td>0.0960</td><td>0.0425</td></tr><tr><td>GollaRec</td><td>0.0932</td><td>0.0423</td><td>0.1112</td><td>0.0502</td></tr></table>
Table 4: Overall performance of GollaRec with different GoT lengths and text/image prompt positions. $^{\dagger}$ indicates an equivalent effectiveness using a two one-sided test (TOST) with $p < 0.05$.
# (RQ3): Does GollaRec exhibit a better integration of the item descriptions and images compared to the strongest baseline BM3?
To address challenge C2 (c.f. Section 2.2) and assess the effectiveness of integrating the item descriptions and images in our GoT prompt, we visualise the resulting embeddings to see if these modalities are closely aligned – an indicator of whether GollaRec integrates coherent semantics across different modalities within a unified semantic space. For conciseness and space constraints, we compare the results of GollaRec with the strongest performing baseline according to Table 1 and Table 2, namely BM3, in both the general (Clothing dataset) and multi-domain (Sports dataset) recommendation scenarios. Note however that we observe similar trends and conclusions with other baselines and datasets. We anticipate that a higher-quality multimodal embedding will exhibit cohesive distributions and lower MSE values, indicating that the corresponding model has effectively interpreted both modalities. In contrast, embeddings that are of poorer quality will likely appear more dispersed and should exhibit higher MSE values, indicating a lack of a comprehensive understanding of the modalities by the model. Figure 4 shows the visualisations of the obtained items' visual and textual embeddings on the Sports and Clothing datasets. From the figure, we observe that the items' visual and textual embeddings in BM3 are widely and distantly scattered across both datasets. Conversely, GollaRec's embeddings are cohesively distributed, resulting in a more unified semantic space on the Clothing and Sports datasets. In addition, the markedly lower average MSE values for GollaRec (1.66, 0.12) compared to BM3 (12.23, 5.87) on these datasets indicate that GollaRec appears to have successfully addressed challenge C2 by integrating coherent semantics across different modalities, thereby improving the model's performance in downstream recommendation tasks.
# (RQ4): How do the length of GoT and the position of demonstration steps affect the performance of our model?
Given that the prompt position and the input length are crucial in language modelling (Navigli et al., 2023), we assess the GoT structure in our model by randomly shuffling the positions of the demonstration prompts and images within GoT, and by constraining the maximum input token length. Table 4 reports the obtained results on the Clothing and Sports datasets. We do not report results on the other datasets since they show similar trends and conclusions. Specifically, the "- RandomDemonstrationPos" variant randomly shuffles the positions of the demonstration prompts within the GoT structure, where a "demonstration" serves as an example for each step (Wei et al., 2022). From the table, we observe that GollaRec performs on par with its "- RandomDemonstrationPos" variant in 3 out of 4 instances across both datasets, suggesting that the random repositioning of the step-by-step prompts does not affect the model performance. Similarly, adjusting the position of the image prompt within GoT (the "- RandomImagePos" variant) shows no positive effect on the recommendation performance. In addition, we assess the impact of constraining the GoT's length to $80\%$ and $60\%$ of the maximum token length, considering that the image tokens already use $25\%$ of the token capacity, and the other necessary tokens, such as the system's tokens, further reduce the remaining available length. From Table 4, we observe decreases in performance for the "- RandomLenTruncation" variants when reducing the maximum input token length on both datasets. This result indicates that GollaRec relies on a richer GoT, which encapsulates the essential demonstration prompts for the recommendation tasks, and that our adaptive truncation method makes an effective use of the limited token length.
# 4 Related Work
There are two main bodies of related work, namely the use of LLMs in recommender systems, and leveraging chain-of-thought prompting approaches.
LLM for Recommendation: Recent recommendation approaches have used LLMs as inference models by designing prompts tailored to recommendation tasks (Geng et al., 2022; Gao et al., 2023; Zhang et al., 2024; Yi and Ounis, 2024). For example, P5 (Geng et al., 2022) employed a pre-trained T5 model to adapt the recommendation tasks into natural language processing scenarios using personalised prompts. LMRecSys (Zhang et al., 2021b) converted the user-item interactions into textual prompts using item indexes, thereby reformulating the recommendation task as a language modelling task. Unlike existing approaches, we construct a prompt that incorporates the user-item graph and item images in an MLLM, thereby providing richer contextualised information. We also devise CoT-like step-by-step demonstration prompts, each serving as an example for the reasoning process in the recommendation tasks. However, the use of LLMs for recommender systems often encounters difficulties in integrating the abundant user-item interactions. This is mainly due to the constraint of a fixed input token length in the LLMs, which hinders their effectiveness in recommendation scenarios (Liu et al., 2023; Ren et al., 2024). To address this problem, we propose an adaptive truncation method that selectively maintains the most informative interactions within our devised GoT prompt. In particular, we integrate an initial ranking list, determined by a pre-trained recommender, into the steps of the GoT's demonstration, thereby ensuring that the list length fits the token length constraint.
Chain-of-Thought: Chain-of-Thought (CoT) is a prompting technique that improves Large Language Models (LLMs) in domains requiring logical reasoning (Chu et al., 2024; Wang et al., 2022b), for example in mathematical reasoning (Ranaldi and Freitas, 2024). While Chain-of-Thought (CoT) is effective in linear textual reasoning, it becomes less useful with graph-related tasks (Wang et al., 2024), such as in recommendation tasks, which involve complex user-item interaction graphs. To address this problem of insufficient graph mining in typical CoT setups, we introduce the Graph-of-Thought (GoT) technique in our proposed GollaRec model, which is specifically tailored for recommender systems. In particular, we leverage a text-graph alignment method to capture the relationships inherent between the item textual embeddings and the corresponding node embeddings. Additionally, we apply graph instruction tuning on an MLLM to effectively integrate user-item graph data in the recommendation tasks.
# 5 Conclusions
In this paper, we introduced GollaRec, a novel Multi-modal Large Language Model (MLLM) for both general and multi-domain recommendation tasks. Specifically, GollaRec consists of an MLLM (i.e. LLaVA), which handles the interaction graph information within a newly designed Graph-of-Thought (GoT) prompt, and a graph adapter to incorporate domain-specific knowledge for both recommendation tasks. Additionally, we devised an adaptive graph truncation method to maximise the number of high-potential items inputted into the GoT, thereby addressing the limited token length in the used MLLM. Our extensive experiments on 6 public benchmark datasets showed that GollaRec significantly outperformed 12 strong existing baselines on both tackled recommendation tasks. The performance improvement reaches up to $18.2\%$ in comparison to the strongest baseline model, BM3, on the used datasets. We also conducted an ablation study, which confirmed the significant contributions of the components of GollaRec, namely the GoT technique, the graph adapter, and the text-graph and text-image alignment methods, to a more effective multi-modal recommendation.
# 6 Limitations
A potential limitation of this work is the manual design of the demonstrations in the introduced GoT technique for the recommendation tasks. Indeed, we have manually written several prompt candidates and selected the one with the best performance based on a set of representative examples. While we compared both manually designed prompts and those automatically generated by models like LLaMA3, we did not observe a significant performance difference between these prompts. Currently, we use a recommendation model (i.e., LightGCN) to determine the initial item list in our GoT prompt. However, exploring the use of more advanced models or similarity measures in the hidden space to refine this process is a potentially interesting direction. In particular, we aim to explore more advanced and deterministic prompt generation strategies (Zhang et al., 2022b; Shum et al., 2023), specifically tailored to recommender system tasks, to potentially further enhance performance. In addition, while in this paper we focused our work on the effectiveness of GollaRec in tackling general and multi-domain recommendation tasks, the application of the model to conversational recommender systems has not yet been fully explored. We leave the evaluation of GollaRec in multi-modal conversational scenarios as future work.
# References

Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. TALLRec: An effective and efficient tuning framework to align large language model with recommendation. In Proc. of RecSys.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Proc. of NeurIPS.

Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu, Bing Qin, and Ting Liu. 2024. A survey of chain of thought reasoning: Advances, frontiers and future. In Proc. of ACL.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL.

Yunfan Gao, Tao Sheng, Youlin Xiang, Yun Xiong, Haofen Wang, and Jiawei Zhang. 2023. Chat-REC: Towards interactive and explainable LLMs-augmented recommender system. arXiv preprint arXiv:2303.14524.

Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In Proc. of RecSys.

Jiayan Guo, Lun Du, Hengyu Liu, Mengyu Zhou, Xinyi He, and Shi Han. 2023. GPT4Graph: Can large language models understand graph structured data? An empirical evaluation and benchmarking. arXiv preprint arXiv:2305.15066.

Ruining He and Julian McAuley. 2016a. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proc. of WWW.

Ruining He and Julian McAuley. 2016b. VBPR: Visual Bayesian personalized ranking from implicit feedback. In Proc. of AAAI.

Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, and Meng Wang. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proc. of SIGIR.

Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. In Proc. of ACL (Findings).

Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. 2024. Mixtral of experts. arXiv preprint arXiv:2401.04088.

Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji-Rong Wen. 2023. StructGPT: A general framework for large language model to reason over structured data. In Proc. of EMNLP.

Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, and Xiang Ren. 2022. A good prompt is worth millions of parameters: Low-resource prompt-based learning for vision-language models. In Proc. of ACL.

Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.

Bin Lei, Chunhua Liao, Caiwen Ding, et al. 2023. Boosting logical reasoning in large language models through a new framework: The chain of thought. arXiv preprint arXiv:2308.08614.

Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2024. Visual instruction tuning. In Proc. of NeurIPS.

Junling Liu, Chao Liu, Peilin Zhou, Renjie Lv, Kang Zhou, and Yan Zhang. 2023. Is ChatGPT a good recommender? A preliminary study. arXiv preprint arXiv:2304.10149.

Jiaqi Ma, Zhe Zhao, Xinyang Yi, Jilin Chen, Lichan Hong, and Ed H Chi. 2018. Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In Proc. of SIGKDD.

Roberto Navigli, Simone Conia, and Björn Ross. 2023. Biases in large language models: Origins, inventory, and discussion. Journal of Data and Information Quality.

Max Peeperkorn, Tom Kouwenhoven, Dan Brown, and Anna Jordanous. 2024. Is temperature the creativity parameter of large language models? arXiv preprint arXiv:2405.00492.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In Proc. of ICML.

Leonardo Ranaldi and Andre Freitas. 2024. Aligning large and small language models via chain-of-thought reasoning. In Proc. of EACL.

Xubin Ren, Wei Wei, Lianghao Xia, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. 2024. Representation learning with large language models for recommendation. In Proc. of WWW.

Kashun Shum, Shizhe Diao, and Tong Zhang. 2023. Automatic prompt augmentation and selection with chain-of-thought from labeled data. In Proc. of EMNLP (Findings).

Hongyan Tang, Junning Liu, Ming Zhao, and Xudong Gong. 2020. Progressive layered extraction (PLE): A novel multi-task learning (MTL) model for personalized recommendations. In Proc. of RecSys.

Jiabin Tang, Yuhao Yang, Wei Wei, Lei Shi, Lixin Su, Suqi Cheng, Dawei Yin, and Chao Huang. 2024. GraphGPT: Graph instruction tuning for large language models. In Proc. of SIGIR.

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Heng Wang, Shangbin Feng, Tianxing He, Zhaoxuan Tan, Xiaochuang Han, and Yulia Tsvetkov. 2024. Can language models solve graph problems in natural language? In Proc. of NeurIPS.

Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, et al. 2022a. Image as a foreign language: BEiT pretraining for all vision and vision-language tasks. In Proc. of CVPR.

Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022b. Self-consistency improves chain of thought reasoning in language models. In Proc. of ICLR.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Proc. of NeurIPS.

Wei Wei, Xubin Ren, Jiabin Tang, Qinyong Wang, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, and Chao Huang. 2024. LLMRec: Large language models with graph augmentation for recommendation. In Proc. of WSDM.

Zhihao Wen and Yuan Fang. 2023. Augmenting low-resource text classification with graph-grounded pre-training and prompting. In Proc. of SIGIR.

Dan Xian, Shaozan Cui, Bo Wang, and Lishuai Cui. 2023. H&M personalized fashion product recommendation using LightGBMRanker. In Proc. of ICBIS 2023.

Zixuan Yi, Zijun Long, Iadh Ounis, Craig Macdonald, and Richard McCreadie. 2024a. Enhancing recommender systems: Deep modality alignment with large multi-modal encoders. Transactions on Recommender Systems.

Zixuan Yi and Iadh Ounis. 2024. A unified graph transformer for overcoming isolations in multi-modal recommendation. In Proc. of RecSys.

Zixuan Yi and Iadh Ounis. 2025. A multi-modal recipe for improved multi-domain recommendation. In Proc. of ECIR.

Zixuan Yi, Iadh Ounis, and Craig Macdonald. 2023a. Contrastive graph prompt-tuning for cross-domain recommendation. Transactions on Information Systems.

Zixuan Yi, Iadh Ounis, and Craig Macdonald. 2023b. Graph contrastive learning with positional representation for recommendation. In Proc. of ECIR.

Zixuan Yi, Xi Wang, and Iadh Ounis. 2024b. A directional diffusion graph transformer for recommendation. arXiv preprint arXiv:2404.03326.

Zixuan Yi, Xi Wang, Iadh Ounis, and Craig Macdonald. 2022. Multi-modal graph contrastive learning for micro-video recommendation. In Proc. of SIGIR.

Junchi Yu, Ran He, and Zhitao Ying. 2023. Thought propagation: An analogical approach to complex reasoning with large language models. In Proc. of ICLR.

Seongjun Yun, Minbyul Jeong, Raehyun Kim, Jaewoo Kang, and Hyunwoo J Kim. 2019. Graph transformer networks. In Proc. of NeurIPS.

Fan Zhang, Qiuying Peng, Yulin Wu, Zheng Pan, Rong Zeng, Da Lin, and Yue Qi. 2022a. Multi-graph based multi-scenario recommendation in large-scale online video services. In Proc. of WWW.

Jinghao Zhang, Yanqiao Zhu, Qiang Liu, Shu Wu, Shuhui Wang, and Liang Wang. 2021a. Mining latent structures for multimedia recommendation. In Proc. of MM.

Junjie Zhang, Ruobing Xie, Yupeng Hou, Wayne Xin Zhao, Leyu Lin, and Ji-Rong Wen. 2024. Recommendation as instruction following: A large language model empowered recommendation approach. Transactions on Information Systems.

Yuhui Zhang, Hao Ding, Zeren Shui, Yifei Ma, James Zou, Anoop Deoras, and Hao Wang. 2021b. Language models as recommender systems: Evaluations and limitations. In Proc. of NeurIPS I (Still) Can't Believe It's Not Better Workshop.

Zhilu Zhang and Mert Sabuncu. 2018. Generalized cross entropy loss for training deep neural networks with noisy labels. In Proc. of NeurIPS.

Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. 2022b. Automatic chain of thought prompting in large language models. In Proc. of ICLR.

Xin Zhou, Hongyu Zhou, Yong Liu, Zhiwei Zeng, Chunyan Miao, Pengwei Wang, Yuan You, and Feijun Jiang. 2023. Bootstrap latent representations for multi-modal recommendation. In Proc. of WWW.
# A Datasets and Baselines
In this section, we first describe the used datasets. Then, we introduce the used baselines for both the general and multi-domain recommendation tasks.

Datasets. Table 5 shows the statistics of the used general and multi-domain datasets. A cross-comparison of these statistics across the 10 used datasets shows a similar sparsity level.
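
For reference, the sparsity values in Table 5 follow the standard definition of one minus the density of the user-item interaction matrix. The short sketch below reproduces the HM row; the variable names are illustrative.

```python
# Sparsity of a user-item interaction matrix:
# sparsity = 1 - (#interactions / (#users * #items)).
# The numbers below are the HM row of Table 5.
num_users, num_items, num_interactions = 27_883, 2_742, 185_297

sparsity = 1.0 - num_interactions / (num_users * num_items)
print(f"{sparsity:.2%}")  # 99.76%
```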
Baselines. Table 6 summarises the 12 used baselines across different aspects for both the general and multi-domain recommendation tasks. We also include GollaRec for comparison. First, we describe the baseline models for the multi-modal general recommendation task:

(1) General Recommender: LightGCN (He et al., 2020) is a light graph neural recommender, characterised by the removal of the non-linear activation functions and the weighted matrices in the feature propagation process.

(2) Multi-modal Recommenders: VBPR (He and McAuley, 2016b) is a recommendation model that exclusively integrates the visual features with the user/item IDs into a matrix factorisation to allow recommendation; MMGCL (Yi et al., 2022) introduces modality edge dropout and modality masking augmentations to the concatenated multi-modal user/item embeddings, enhancing multi-modal representation learning through a self-supervised learning paradigm; BM3 (Zhou et al., 2023) bootstraps the latent user/item representations in contrastive learning by reconstructing the user-item interaction graph to enhance the recommendation performance.

(3) Language-based Recommenders: P5 (Geng et al., 2022) is pre-trained on the user-item interaction data to adapt the LLMs for recommendations. It converts the recommendation tasks into tailored natural language sentences using personalised prompts; LMRecSys (Zhang et al., 2021b) transforms the recommendation task into a language modelling task by converting a user's interaction sequence into a natural language query.

(4) MLLM Models: CLIP (Radford et al., 2021) leverages a dual-stream transformer architecture to encode image and text pairs. In this paper, we feed the item images and descriptions to CLIP and perform dot products between the obtained visual and textual embeddings when ranking items for the target users (an illustrative sketch of this scoring is given below, after Table 5); BEiT-3 (Wang et al., 2022a) exhibits a unified transformer architecture to encode different modalities, with a mixture of modality experts replacing the feed-forward network of a standard Transformer so as to obtain visual and textual embeddings. Similar to CLIP, we perform dot products between the obtained visual and textual embeddings when ranking items for the target users; LLaVA (Liu et al., 2024) leverages a frozen CLIP visual encoder and a large language model (e.g., Vicuna-7B) to encode the visual and textual inputs. In this paper, we provide as input the user's interaction sequence of the items' descriptions to estimate the user profile for ranking.

<table><tr><td>Datasets</td><td>#Users</td><td>#Items</td><td>#Interactions</td><td>Sparsity</td></tr><tr><td colspan="5">General Recommendation</td></tr><tr><td>HM</td><td>27,883</td><td>2,742</td><td>185,297</td><td>99.76%</td></tr><tr><td>Clothing</td><td>39,387</td><td>22,499</td><td>185,297</td><td>99.99%</td></tr><tr><td>Baby</td><td>19,445</td><td>7,037</td><td>271,001</td><td>99.99%</td></tr><tr><td colspan="5">Multi-domain Recommendation</td></tr><tr><td>Food</td><td>115,349</td><td>39,670</td><td>1,027,413</td><td>99.99%</td></tr><tr><td>Home</td><td>731,913</td><td>185,552</td><td>6,451,926</td><td>99.99%</td></tr><tr><td>Clothing</td><td>39,387</td><td>23,033</td><td>237,488</td><td>99.97%</td></tr><tr><td>Office</td><td>87,436</td><td>25,986</td><td>684,837</td><td>99.97%</td></tr><tr><td>Pantry</td><td>13,101</td><td>4,898</td><td>126,962</td><td>99.82%</td></tr><tr><td>Electronics</td><td>192,403</td><td>63,001</td><td>1,689,188</td><td>99.99%</td></tr><tr><td>Sports</td><td>87,436</td><td>25,986</td><td>684,837</td><td>99.95%</td></tr></table>

Table 5: Statistics of the used datasets.
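
The paper does not detail the zero-shot scoring pipeline beyond the dot products mentioned above, but the following minimal sketch illustrates one plausible reading using the public Hugging Face CLIP checkpoint; the image paths and descriptions are placeholders.

```python
# Hedged sketch: score each candidate item by the dot product between its
# CLIP visual embedding and its CLIP textual embedding, then rank by score.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

descriptions = ["portable reclining camp chair", "folding pocket knife"]     # placeholders
images = [Image.open(path) for path in ["item_247.jpg", "item_15219.jpg"]]  # placeholders

with torch.no_grad():
    text_inputs = processor(text=descriptions, return_tensors="pt", padding=True)
    image_inputs = processor(images=images, return_tensors="pt")
    text_emb = model.get_text_features(**text_inputs)     # (n_items, d)
    image_emb = model.get_image_features(**image_inputs)  # (n_items, d)

scores = (text_emb * image_emb).sum(dim=-1)  # per-item visual-textual dot product
ranking = scores.argsort(descending=True)    # candidate items, best first
```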
For the multi-modal multi-domain recommendation task, apart from the aforementioned multi-modal models, we introduce the following multi-domain baseline models:

(1) MOME (Ma et al., 2018) uses multiple expert networks and a gating network that selects a relevant subset of experts for each target recommendation domain, thereby enhancing the recommendation accuracy in those target domains (the expert/gating mechanism is sketched after this list).

(2) PLE (Tang et al., 2020) distinguishes between the task-shared and task-specific experts and uses a progressive routing mechanism to dynamically route the target domain recommendations through the appropriate experts.

(3) MGFN (Zhang et al., 2022a) uses Graph Attention Networks to learn both intra-domain and inter-domain knowledge, enhancing the model's ability to facilitate recommendations across multiple domains.
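
To make the expert/gating pattern behind MOME and PLE concrete, here is a minimal PyTorch sketch of a multi-gate mixture-of-experts layer; it illustrates the general mechanism, not the authors' implementation, and all dimensions are arbitrary.

```python
# Illustrative sketch of a multi-gate mixture-of-experts layer: a per-domain
# gate produces a softmax over shared experts, and the domain representation
# is the weighted mixture of the expert outputs.
import torch
import torch.nn as nn

class MultiGateMoE(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_experts, num_domains):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
             for _ in range(num_experts)]
        )
        # One gating network per recommendation domain.
        self.gates = nn.ModuleList(
            [nn.Linear(input_dim, num_experts) for _ in range(num_domains)]
        )

    def forward(self, x, domain):
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, H)
        weights = torch.softmax(self.gates[domain](x), dim=-1)          # (B, E)
        return (weights.unsqueeze(-1) * expert_outs).sum(dim=1)         # (B, H)

moe = MultiGateMoE(input_dim=64, hidden_dim=32, num_experts=4, num_domains=3)
out = moe(torch.randn(8, 64), domain=0)  # domain-0 mixture, shape (8, 32)
```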
# B Training and Hyperparameter Settings
As introduced in Section 2.2, we use LLaVA (Vicuna-7B) as the MLLM in our GollaRec model. Following LLaVA's setup (Liu et al., 2024), we maintain consistent batch size, token length, and optimiser settings. Additionally, we adjust the number of input image prompts $num_v$, the temperature $tem$, the learning rate, the warmup ratio, and the weight decay to optimise performance across the used datasets using the validation sets. Table 7 summarises the training details of our GollaRec model; an illustrative optimiser setup is sketched after the tables below.

<table><tr><td>Method</td><td>General</td><td>Multi-modal</td><td>MLLM</td><td>Multi-domain</td><td>Language-based</td></tr><tr><td>LightGCN</td><td>✓</td><td>×</td><td>×</td><td>×</td><td>×</td></tr><tr><td>VBPR</td><td>✓</td><td>✓</td><td>×</td><td>×</td><td>×</td></tr><tr><td>MMGCL</td><td>✓</td><td>✓</td><td>×</td><td>×</td><td>×</td></tr><tr><td>BM3</td><td>✓</td><td>✓</td><td>×</td><td>×</td><td>×</td></tr><tr><td>CLIP</td><td>×</td><td>×</td><td>✓</td><td>×</td><td>×</td></tr><tr><td>BEiT-3</td><td>×</td><td>×</td><td>✓</td><td>×</td><td>×</td></tr><tr><td>LLaVA</td><td>×</td><td>×</td><td>✓</td><td>×</td><td>×</td></tr><tr><td>MOME</td><td>×</td><td>×</td><td>×</td><td>✓</td><td>×</td></tr><tr><td>PLE</td><td>×</td><td>×</td><td>×</td><td>✓</td><td>×</td></tr><tr><td>MGFN</td><td>×</td><td>×</td><td>×</td><td>✓</td><td>×</td></tr><tr><td>P5</td><td>×</td><td>×</td><td>×</td><td>×</td><td>✓</td></tr><tr><td>LMRecSys</td><td>×</td><td>×</td><td>×</td><td>×</td><td>✓</td></tr><tr><td>GollaRec</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td></tr></table>

Table 6: Summary of the compared approaches.

<table><tr><td>Dataset</td><td>HM</td><td>Clothing</td><td>Baby</td><td>Pantry</td><td>Electronics</td><td>Sports</td></tr><tr><td>LLM Configuration</td><td>Vicuna-7B</td><td>Vicuna-7b-delta</td><td>Vicuna-7B</td><td>Vicuna-7B</td><td>Vicuna-7B</td><td>Vicuna-7B</td></tr><tr><td>Learning Rate</td><td>1e-3</td><td>3e-3</td><td>1e-4</td><td>2e-4</td><td>2e-3</td><td>5e-4</td></tr><tr><td>Batch Size</td><td>8</td><td>8</td><td>8</td><td>8</td><td>8</td><td>8</td></tr><tr><td>Maximum Input Length</td><td>2048</td><td>2048</td><td>2048</td><td>2048</td><td>2048</td><td>2048</td></tr><tr><td>Training Steps</td><td>50,000</td><td>50,000</td><td>60,000</td><td>60,000</td><td>50,000</td><td>60,000</td></tr><tr><td>Warmup Ratio</td><td>0.03</td><td>0.03</td><td>0.01</td><td>0.03</td><td>0.01</td><td>0.03</td></tr><tr><td>Weight Decay</td><td>0</td><td>0.01</td><td>0</td><td>0</td><td>0</td><td>0.01</td></tr><tr><td>Optimiser</td><td>AdamW</td><td>AdamW</td><td>AdamW</td><td>AdamW</td><td>AdamW</td><td>AdamW</td></tr></table>

Table 7: Hyperparameters of GollaRec on the used datasets.
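
As an illustration, the Table 7 settings map onto a standard AdamW plus warmup setup roughly as follows. This is a sketch only: the HM column is used, the model is a placeholder, and the linear decay schedule is an assumption the paper does not state.

```python
# Hedged sketch of the optimiser/scheduler implied by Table 7 (HM column).
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(16, 16)  # placeholder for the actual GollaRec model
training_steps, warmup_ratio = 50_000, 0.03

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.0)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(warmup_ratio * training_steps),  # 1,500 warmup steps
    num_training_steps=training_steps,
)
```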
# C Additional Experimental Results
In this section, we first evaluate the performance of GollaRec using additional LLaVA variants, including various model scales and different backbone LLMs, on both the general and multi-domain recommendation datasets. Then, we present a cold-start analysis, an assessment of GollaRec's time efficiency, and a qualitative case study comparing it with the strongest baseline, BM3. These details are not included in the main sections due to space constraints.

Performance of GollaRec with different LLaVA Variants. In the main results section (Section 3.2), we reported the performance of GollaRec using a base version (LLaVA Vicuna-7B) of the corresponding MLLM. To determine whether GollaRec's effectiveness extends to other LLaVA variants, in this section, we evaluate the performance of additional GollaRec configurations with different LLaVA variants, including LLaVA Llama-7B, LLaVA Llama-13B, LLaVA Mistral-7B, and LLaVA Vicuna-13B. These variants vary in terms of the model scale (7B vs. 13B) and employ different backbone architectures (Llama, Mistral, Vicuna), thereby offering insights into how changes in the model's configuration might affect its performance. For conciseness, we report the experiments on the Clothing and Sports datasets, since we observed consistent trends and conclusions on the other used datasets. Table 8 shows the performance comparison between our default GollaRec configuration (LLaVA Vicuna-7B) and alternative configurations using different LLaVA variants. We observe that our default GollaRec configuration maintains an effectiveness equivalent to that of a larger MLLM (LLaVA Vicuna-13B). In addition, we observe that GollaRec (LLaVA Mistral-7B) exhibits a performance equivalent to our default GollaRec configuration in 2 out of 4 instances, while GollaRec (LLaVA Llama-7B) does not show a competitive performance. These results suggest that LLaVA Mistral-7B is also a good potential alternative MLLM for GollaRec, in contrast to LLaVA Llama-7B. When comparing GollaRec (LLaVA Llama-7B, LLaVA Vicuna-7B) with GollaRec (LLaVA Llama-13B, LLaVA Vicuna-13B), we do not observe marked changes on either dataset. This may indicate that, given the relatively small scale of the recommendation datasets (e.g., 22,499 item descriptions for the Clothing dataset), a larger MLLM does not gain sufficient information to effectively leverage its larger parameter set. This finding suggests a possible future research direction, namely to explore whether the scaling laws (Kaplan et al., 2020) hold with larger-scale recommendation datasets.

<table><tr><td>Dataset</td><td colspan="2">General Rec (Clothing)</td><td colspan="2">Multi-domain Rec (Sports)</td></tr><tr><td>GollaRec Variants</td><td>Recall@20</td><td>NDCG@20</td><td>Recall@20</td><td>NDCG@20</td></tr><tr><td>-LLaVA Llama-7B</td><td>0.0822</td><td>0.0383</td><td>0.0947</td><td>0.0434</td></tr><tr><td>-LLaVA Llama-13B</td><td>0.0846</td><td>0.0395†</td><td>0.0936†</td><td>0.0430</td></tr><tr><td>-LLaVA Mistral-7B</td><td>0.0904</td><td>0.0411†</td><td>0.1076†</td><td>0.0503</td></tr><tr><td>-LLaVA Vicuna-13B</td><td>0.0946†</td><td>0.0442†</td><td>0.1103†</td><td>0.0506†</td></tr><tr><td>GollaRec (LLaVA Vicuna-7B)</td><td>0.0932</td><td>0.0423</td><td>0.1112</td><td>0.0502</td></tr></table>

Table 8: Performance of GollaRec with different LLaVA variants on the Clothing and Sports datasets. ${}^{\dagger}$ indicates an effectiveness equivalence with the default configuration using a two one-sided equivalence test (TOST) with $p < 0.05$.
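
For clarity, the Recall@20 and NDCG@20 values reported in Tables 8 and 9 correspond to the standard per-user definitions with binary relevance, sketched below with toy inputs.

```python
# Per-user Recall@k and NDCG@k with binary relevance, as is standard for
# implicit-feedback recommendation (toy example data).
import math

def recall_at_k(ranked_items, relevant_items, k=20):
    hits = sum(1 for item in ranked_items[:k] if item in relevant_items)
    return hits / len(relevant_items) if relevant_items else 0.0

def ndcg_at_k(ranked_items, relevant_items, k=20):
    dcg = sum(
        1.0 / math.log2(rank + 2)
        for rank, item in enumerate(ranked_items[:k])
        if item in relevant_items
    )
    ideal = sum(1.0 / math.log2(r + 2) for r in range(min(len(relevant_items), k)))
    return dcg / ideal if ideal > 0 else 0.0

print(recall_at_k([3, 7, 1], {1, 9}), ndcg_at_k([3, 7, 1], {1, 9}))
```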
Cold-start Analysis. To investigate the effectiveness of our GollaRec model using the GoT prompt, we examine GollaRec in a cold-start scenario by focusing on users with fewer than 10 interactions. We conduct this analysis across all used datasets to assess how well GollaRec estimates the profiles of the cold-start users in both the general and multi-domain recommendation tasks. Table 9 shows the performance of GollaRec for both the cold-start and regular users, in comparison to the best-performing baseline, BM3, in terms of Recall@20 (we observed the same conclusions on NDCG@20). From Table 9, we observe that GollaRec significantly outperforms BM3 for the cold-start users, according to the paired t-test, on all six datasets. This observation suggests that GollaRec successfully leverages the GoT prompt and the MLLM's world knowledge to bring useful information when estimating a user's preferences. In addition, Table 9 shows that GollaRec benefits the cold-start users more than the regular users. For example, on the HM dataset, GollaRec improves the performance by $21.73\%$ for the cold-start users in comparison to BM3, while it improves the performance by just $9.66\%$ for the regular users. This result suggests that the GollaRec model successfully leverages the GoT prompt to incorporate more useful context and multi-modal information for a cold-start user, thereby enriching the representations of users with sparse interactions in both the general and multi-domain recommendation tasks.

<table><tr><td>Datasets</td><td></td><td>GollaRec (Recall@20)</td><td>BM3 (Recall@20)</td><td>%Improv.</td></tr><tr><td rowspan="2">HM</td><td>Cold-start</td><td>0.1574*</td><td>0.1293</td><td>21.73%</td></tr><tr><td>Regular</td><td>0.2134*</td><td>0.1946</td><td>9.66%</td></tr><tr><td rowspan="2">Clothing</td><td>Cold-start</td><td>0.0782*</td><td>0.0592</td><td>32.09%</td></tr><tr><td>Regular</td><td>0.1208*</td><td>0.1072</td><td>12.68%</td></tr><tr><td rowspan="2">Baby</td><td>Cold-start</td><td>0.0672*</td><td>0.0603</td><td>11.44%</td></tr><tr><td>Regular</td><td>0.1252</td><td>0.1124</td><td>11.39%</td></tr><tr><td rowspan="2">Pantry</td><td>Cold-start</td><td>0.0734*</td><td>0.0586</td><td>25.26%</td></tr><tr><td>Regular</td><td>0.1572*</td><td>0.1347</td><td>16.70%</td></tr><tr><td rowspan="2">Electronics</td><td>Cold-start</td><td>0.0633*</td><td>0.0550</td><td>15.09%</td></tr><tr><td>Regular</td><td>0.0984*</td><td>0.0887</td><td>10.94%</td></tr><tr><td rowspan="2">Sports</td><td>Cold-start</td><td>0.0784*</td><td>0.0651</td><td>20.43%</td></tr><tr><td>Regular</td><td>0.1462*</td><td>0.1315</td><td>11.18%</td></tr></table>

Table 9: Cold-start analysis results for GollaRec and the BM3 baseline. 'Cold-start' denotes the cold-start users and 'Regular' denotes the regular users in the used datasets. * denotes a significant difference between GollaRec and BM3 using the paired t-test with $p < 0.05$.
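
The significance and equivalence tests behind Tables 8 and 9 can be reproduced along the following lines; the per-user score arrays below are synthetic, and the TOST margin `delta` is an illustrative choice, since the paper does not report its margin.

```python
# Sketch of the paired t-test (Table 9) and TOST equivalence test (Table 8)
# applied to per-user Recall@20 arrays (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gollarec = rng.uniform(0.0, 0.3, size=1000)         # per-user Recall@20
bm3 = gollarec - rng.uniform(0.0, 0.05, size=1000)  # a slightly weaker system

# Paired t-test: is GollaRec significantly different from BM3?
t_stat, p_value = stats.ttest_rel(gollarec, bm3)
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.4f}")

# TOST: two one-sided paired tests against a margin of +/- delta;
# equivalence is claimed when both one-sided tests are significant.
delta = 0.01  # assumed equivalence margin
diff = gollarec - bm3
_, p_lower = stats.ttest_1samp(diff, -delta, alternative="greater")
_, p_upper = stats.ttest_1samp(diff, +delta, alternative="less")
print(f"TOST p-value: {max(p_lower, p_upper):.4f}")
```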
Hyper-parameter Study. We now study the sensitivity of our GollaRec model to its hyperparameters. We primarily analyse two important parameters in our GollaRec model, namely: (i) the number of image prompts, denoted as $num_v$, which determines the count of a target user's historically interacted items included in the input; and (ii) the temperature parameter, denoted as $tem$, which influences the diversity of the outputs from the utilised MLLM. Higher values of $tem$ enable a broader exploration and more varied responses, while lower values promote more deterministic and consistent outputs (Peeperkorn et al., 2024). Figure 5 and Figure 6 show the performance improvement of GollaRec compared to the best-performing baseline BM3 with different values of $num_v$ and $tem$, respectively.

Figure 5: Performance of our GollaRec model with respect to different $num_v$ on the HM and Pantry datasets.

As discussed in Section 2.2, we incorporate the image of the last interacted item of the target user into our GollaRec model to estimate user profiles in a multi-modal manner. The $num_v$ parameter indicates the number of historical visual interactions included in the input to our GollaRec model. Due to the constraint of the input token length, we vary $num_v$ within the range $\{0, 1, 2, 3\}$ with a step size of 1. From Figure 5, we observe that the best performance of our GollaRec model generally occurs at $num_v = 1$ across all the used datasets. These results highlight the importance of balancing the visual data and managing the limited token space available, suggesting that a single image is more beneficial than none or multiple images in the recommendation tasks.
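
A minimal sketch of how such a $num_v$ cap could be applied when assembling the MLLM input is given below; the helper name and the history are illustrative placeholders, not the authors' code.

```python
# Keep only the images of the user's most recent num_v interacted items
# when building the image-prompt portion of the MLLM input.
def select_image_prompts(interacted_item_images, num_v=1):
    """Return the last `num_v` item images (empty list when num_v = 0)."""
    return interacted_item_images[-num_v:] if num_v > 0 else []

history = ["tent_light.jpg", "road_bike.jpg", "folding_knife.jpg"]  # placeholder history
print(select_image_prompts(history, num_v=1))  # ['folding_knife.jpg']
```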
Figure 6: Performance of our GollaRec model with respect to different $tem$ on the HM and Pantry datasets.

We also assess the impact of different values of $tem$, which regulates the diversity of the outputs from the utilised MLLM. By adjusting $tem$, we aim to produce a more diverse and effective user embedding for the target user. We vary $tem$ within the range $\{0.1, 0.2, \ldots, 1.0\}$ with a step size of 0.1. From Figure 6, we observe that our GollaRec model reaches its peak performance on the HM and Pantry datasets when $tem = 0.2$ and $tem = 0.6$, respectively. These results indicate that the optimal $tem$ value may vary based on the specific recommendation scenario, emphasising the importance of careful parameter tuning to achieve the best results.
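
To make the role of $tem$ concrete, the sketch below shows the standard temperature-scaled softmax: logits are divided by $tem$ before normalisation, so lower values sharpen the output distribution and higher values flatten it (the logits are synthetic).

```python
# Temperature-scaled softmax over next-token logits.
import numpy as np

def softmax_with_temperature(logits, tem):
    z = np.asarray(logits, dtype=float) / tem
    z -= z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.5]
for tem in (0.2, 0.6, 1.0):
    print(tem, np.round(softmax_with_temperature(logits, tem), 3))
```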
Training & Inference Time. In this work, we use a graph instruction tuning task to train GollaRec in order to enhance its understanding of the graph structures within the user-item interaction graph. We report the average training and inference times per epoch of our GollaRec model across various datasets, as illustrated in Table 10. Since our primary focus is to demonstrate how effectively MLLMs can be adapted to generate effective recommendations, we report these time durations to evaluate the efficiency of GollaRec in practical recommendation scenarios, without directly comparing them to those of the other baselines.

Table 10 shows the average training and inference time compared with existing MLLMs on the HM dataset. We observe similar conclusions across the other used datasets. We also provide additional results for recent open- and closed-source MLLMs (i.e., LLaVA, Llama 3.2, GPT-4V) for a fair comparison. In terms of the effectiveness of these MLLMs in Table 10, we report their recommendation performances in the zero-shot setting. Moreover, we do not report training times for LLaVA, Llama 3.2 and GPT-4V as they are not trained with our graph instruction tuning method. From Table 10, we observe that there are no significant differences in the inference time between our GollaRec and the other MLLMs, thereby indicating that our GollaRec model exhibits comparable time efficiency while providing more effective recommendations compared to existing MLLMs.

<table><tr><td>Model</td><td>Training Time</td><td>Inference Time</td><td>Recall@20</td></tr><tr><td>GollaRec</td><td>24.66h</td><td>5.54s</td><td>0.1880</td></tr><tr><td>LLaVA</td><td>-</td><td>5.36s</td><td>0.1346</td></tr><tr><td>Llama 3.2</td><td>-</td><td>8.32s</td><td>0.1242</td></tr><tr><td>GPT-4V</td><td>-</td><td>6.72s</td><td>0.1043</td></tr></table>

Table 10: Time efficiency comparison on the HM dataset in terms of training and inference time.

Case Study. Although the proposed GollaRec model showed promising effectiveness in our quantitative evaluation in Table 1 and Table 2, it is worth investigating its recommendation outcomes and decision-making processes through a qualitative analysis. Therefore, we present a case study using the Sports dataset in Figure 7, illustrating the used GoT prompt and the items recommended by our GollaRec model in comparison to those recommended by the strongest baseline model, BM3. We ensure the representativeness of our results by selecting a user who shows a median level of performance improvement compared to BM3 in our user pool. From Figure 7, we observe that the user's interest is primarily in camping and outdoor equipment. GollaRec accurately recommends three relevant products in its top 20 rankings: a camp chair (#247), an outdoor tool - a knife (#15219), and another outdoor tool - a carabiner (#9168). In contrast, BM3's recommendations are less effective and only include two relevant items: an outdoor tool - a knife (#15219) and another outdoor tool - a carabiner (#9168). This comparison between GollaRec and BM3 demonstrates that GollaRec effectively leverages the GoT prompt to adapt a powerful MLLM to the recommendation task by adequately integrating relevant and diverse graph information within GoT.
# Case Study
Prompt_text: You are a product recommendation system and required to recommend user with product based on user historical items: item 1122 <Coleman Tent Light Coleman Sports & Outdoors Outdoor Gear Camping Hiking Lights Lanterns Lanterns>; item 2362 <Schwinn Katana Road Bike (54cm Frame) Sports & Outdoors Cycling Bikes Road Bikes The Schwinn S2704 700c Men's Katana Bicycle> ... item 921 <Victorinox Explorer Victorinox Sports & Outdoors Outdoor Gear Camping Hiking Knives Tools Folding Knives>; that each product with title (similar title), category (similar category) and description (similar description). Please output the user preference by ranking the index of the user's candidate items: item 2316 <OnGuard Bulldog MINI TC 5013TC Bicycle U-Lock Sports & Outdoors Cycling Accessories Bike Locks U-Locks 3-Millimeter hardened ultra steel shackle>; item 247 <Picnic Time Portable Reclining Camp Chair, Black/Gray Sports & Outdoors Outdoor Gear Camping Hiking>; item 17281 <KRuger 10/22 Receiver Stock Takedown Cap Screw Sports & Outdoors Hunting & Fishing Hunting Gun Accessories Gun Stocks Ruger 10/22 stainless steel takedown screw to attach receiver to stock for 10/22, Charger, Elite22 and Cohort. Allen wrench cap screw Type.>; ... item 4618 <Gear Aid ReviveX Nubuck, Suede & Fabric Water Repellant, 4-Ounce McNett Sports & Outdoors Outdoor Gear Camping & Hiking Personal Care Insect Repellant When you need to waterproof hiking boots, protect suede shoes or weatherproof GORE-TEX footwear, only one water repellent will do>; but not user historical interacted items. Enclosed is the image of user's last purchased item <image>.

Prompt_image: [image of the user's last purchased item]

Figure 7: User #826 on the Sports dataset. The items with descriptions highlighted in red represent the correct recommendations in the test set.
# GoT_steps:
Let's think about the recommended item list in a step-by-step manner.

Consider key attributes—titles, categories, descriptions of each candidate item from historical items and an interaction graph <graph_begin, ..., graph_token4, ..., graph_end>.

Then, consider to review an initial ranking list: item 14517 <Trek Light Gear Double Hammock Sports & Outdoors Outdoor Gear Camping Hiking Camping Furniture Cots Hammocks The Trek Light Double Hammock represents the perfect blend of comfort and function in an extremely lightweight and portable design.>; item 7195 <Sea to Summit X Plate Sports & Outdoors Outdoor Gear Camping Hiking Camp Kitchen Dishes Utensils Plates Introducing three innovative items for your next meal outdoors collapsible hybrids comprised of a rigid, cut-resistant base made of high temperature, food-grade nylon and walls made of food-grade flexible silicone>; item 15219 <Kershaw Amphibian - Kydex Sheath Knife Kershaw Sports & Outdoors Hunting Fishing Hunting Hunting Tactical Knives Hunting Knives>; item 836 <Russell Athletic Men's Big & Tall Basic Crew Neck Sweatshirt Sports & Outdoors Clothing Men Sweatshirts>; ... item 15219; ..., item 247; ... generated using a graph recommender.

Then, consider the semantic similarity between the last purchased item and the items in the candidate set in both visual and textual modalities. The final ranking list is derived from the candidate items.
# GollaRec's top-20 recommendation:
item 7146 <180 Tack Snow And Ash Pan 2 Piece 180ST-AP2P-s 180 Tack Sports & Outdoors Outdoor Gear Camping Hiking Camp Kitchen Camp Stoves>; item 247 <Picnic Time Portable Reclining Camp Chair, Black/Gray Sports & Outdoors Outdoor Gear Camping Hiking>; ... item 15219 <Kershaw Amphibian - Kydex Sheath Knife Kershaw Sports & Outdoors Hunting Fishing Hunting Hunting Tactical Knives Hunting Knives>; item 7146 <180 Tack Snow And Ash Pan 2 Piece 180ST-AP2P-s 180 Tack Sports & Outdoors Outdoor Gear Camping Hiking Camp Kitchen Camp Stoves>; ... item 9168 <Nite Ize SBP2-03-01BG S-Biner Plastic Size-2 Double Gated Carabiner, Black Nite Ize Sports & Outdoors>;
# BM3's top-20 recommendation:
item 852 <Coghlan Fuel Tablets Coghlan Sports & Outdoors Outdoor Gear Camping Hiking Camp Kitchen Coghlan's 9565 Fuel Tablets - For use with Coghlan's Emergency Stove. A safe, clean burning fuel, that is easy to ignite. These solid fuel tablets will burn for 9 minutes. 24 Tablets per box.>; ... item 14618 <Survivor HK-690 Series Survival Knife 8.5-Inch Overall Survivor Sports & Outdoors Hunting Fishing Tactical Duty Tactical Knives>; item 15219 <Kershaw Amphibian - Kydex Sheath Knife Kershaw Sports & Outdoors Hunting Fishing Hunting Tactical Knives Hunting Knives>; ... item 9168 <Nite Ize SBP2-03-01BG S-Biner Plastic Size-2 Double Gated Carabiner, Black Nite Ize Sports & Outdoors>; ... item 10372 <Pearl Izumi Men's Elite Thermal Cycling Tight Sports & Outdoors Cycling Clothing Men Pants ELITE Thermal Fleece fabric panels provide superior moisture transfer and warmth ELITE 3D Chamois Constructed leg articulation for a full range of motion 8" lower leg zipper with internal draft flap and zipper garage Contoured leg opening provides additional coverage at top of shoe Silicone gripper at ankles to keep tights in place 360 degree reflectivity>;

2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/images.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f0c04b9199dac3ca0903a8fa8759c014f6f270e6c8f9d5941099f7d3049895c
+size 571802

2025/A Multi-modal Large Language Model with Graph-of-Thought for Effective Recommendation/layout.json ADDED
The diff for this file is too large to render. See raw diff

2025/A Novel Computational Modeling Foundation for Automatic Coherence Assessment/d0cb348e-9496-4e82-aed0-30776483e396_content_list.json ADDED
The diff for this file is too large to render. See raw diff