Modalities: Text
Formats: parquet
Libraries: Datasets, Dask
pinzhenchen committed (verified) · Commit 93ce375 · 1 Parent(s): 00fdf1a

Update README.md

Files changed (1): README.md (+5 −2)
README.md CHANGED

@@ -436,8 +436,11 @@ language:
 - xh
 ---
 
+<em><strong>We plan to make the data accessible in late September.</strong></em>
+
 # DocHPLT: A Massively Multilingual Document-Level Translation Dataset
 
+
 Existing document-level machine translation resources are only available for a handful of languages, mostly high-resourced ones. To facilitate the training and evaluation of document-level translation and, more broadly, long-context modeling for global communities, we create DocHPLT, the largest publicly available document-level translation dataset to date. It contains 124 million aligned document pairs across 50 languages paired with English, comprising 4.26 billion sentences, with the further possibility of 2500 bonus pairs not involving English. Unlike previous reconstruction-based approaches that piece together documents from sentence-level data, we modify an existing web extraction pipeline to preserve complete document integrity from the source, retaining all content including unaligned portions. After our preliminary experiments identify the optimal training context strategy for document-level translation, we demonstrate that LLMs fine-tuned on DocHPLT substantially outperform off-the-shelf instruction-tuned baselines, with particularly dramatic improvements for under-resourced languages. We open-source the dataset under a permissive license, providing essential infrastructure for advancing multilingual document-level translation.
 
 
@@ -498,7 +501,7 @@ Existing document-level machine translation resources are only available for a h
 | **xh** | 995,556 | 21,561 |
 | **total** | **4,264,894,818** | **87,775,169** |
 
-Link for Arxiv preprint: [https://arxiv.org/abs/2508.13079](https://arxiv.org/abs/2508.13079)
+Link to arXiv preprint: [https://arxiv.org/abs/2508.13079](https://arxiv.org/abs/2508.13079)
 
 ## Citation
 
@@ -509,6 +512,6 @@ If you use this resource, please kindly cite:
 author={Dayyán O'Brien and Bhavitvya Malik and Ona de Gibert and Pinzhen Chen and Barry Haddow and Jörg Tiedemann},
 year={2025},
 journal={arXiv preprint},
-url={[https://arxiv.org/abs/2508.13079](https://arxiv.org/abs/2508.13079)},
+url={https://arxiv.org/abs/2508.13079},
 }
 ```
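The abstract above stresses that, unlike reconstruction-based datasets, DocHPLT documents retain all content including unaligned portions. A minimal sketch of what a document-level pair with partial sentence alignment might look like — note the field names (`src_doc`, `trg_doc`, `aligned`) are illustrative assumptions, not the actual DocHPLT schema:

```python
# Hypothetical document-pair record: field names are assumptions for
# illustration only, not the real DocHPLT parquet schema.

def alignment_coverage(doc):
    """Fraction of source sentences that have an aligned target sentence."""
    aligned_src = {i for i, _ in doc["aligned"]}
    return len(aligned_src) / len(doc["src_doc"])

example = {
    "src_doc": ["Hello.", "This line has no translation.", "Goodbye."],
    "trg_doc": ["Hei.", "Näkemiin."],
    # (source_index, target_index) pairs; source sentence 1 is kept
    # in the document even though it is unaligned.
    "aligned": [(0, 0), (2, 1)],
}

print(alignment_coverage(example))  # → 0.6666666666666666
```

Keeping unaligned sentences in place preserves the document's original discourse structure, which is what makes the data usable for long-context training rather than only sentence-level translation.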