note on accessibility
README.md CHANGED
@@ -2158,6 +2158,8 @@ language:
 - xh
 ---

+<em><strong>At the moment the data is gated. We plan to make it openly accessible in late September.</strong></em>
+
 # DocHPLT: A Massively Multilingual Document-Level Translation Dataset

 Existing document-level machine translation resources are only available for a handful of languages, mostly high-resourced ones. To facilitate the training and evaluation of document-level translation and, more broadly, long-context modeling for global communities, we create DocHPLT, the largest publicly available document-level translation dataset to date. It contains 124 million aligned document pairs across 50 languages paired with English, comprising 4.26 billion sentences, with the further possibility of providing 2,500 bonus pairs not involving English. Unlike previous reconstruction-based approaches that piece together documents from sentence-level data, we modify an existing web extraction pipeline to preserve complete document integrity from the source, retaining all content including unaligned portions. After our preliminary experiments identify the optimal training context strategy for document-level translation, we demonstrate that LLMs fine-tuned on DocHPLT substantially outperform off-the-shelf instruction-tuned baselines, with particularly dramatic improvements for under-resourced languages. We open-source the dataset under a permissive license, providing essential infrastructure for advancing multilingual document-level translation.
@@ -2220,7 +2222,7 @@ Existing document-level machine translation resources are only available for a h
 | **xh** | 995,556 | 21,561 |
 | **total** | **4,264,894,818** | **87,775,169** |

-Link for
+Link for arXiv preprint: [https://arxiv.org/abs/2508.13079](https://arxiv.org/abs/2508.13079)

 ## Citation

@@ -2231,6 +2233,6 @@ If you use this resource, please kindly cite:
   author={Dayyán O'Brien and Bhavitvya Malik and Ona de Gibert and Pinzhen Chen and Barry Haddow and Jörg Tiedemann},
   year={2025},
   journal={arXiv preprint},
-  url={
+  url={https://arxiv.org/abs/2508.13079},
 }
 ```
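For reference, the gated-access note added above means access must first be requested on the dataset page, after which downloads are authorized with a Hugging Face token. Below is a minimal sketch of loading the data once access is granted; the repo id `HPLT/DocHPLT` and the config name `en-xh` are illustrative assumptions, not confirmed identifiers, so check the dataset page for the actual values.

```python
# Minimal sketch: loading a gated Hugging Face dataset after access is granted.
# Assumptions: the repo id "HPLT/DocHPLT" and config "en-xh" are hypothetical.
from huggingface_hub import login
from datasets import load_dataset

# Authenticate with an access token (created at hf.co/settings/tokens),
# or run `huggingface-cli login` once in a shell instead.
login(token="hf_...")  # placeholder; substitute a real token

# Stream rather than download up front: document-level data at this
# scale (4.26B sentences overall) can run to hundreds of gigabytes.
docs = load_dataset("HPLT/DocHPLT", "en-xh", split="train", streaming=True)

for pair in docs.take(1):
    print(pair)  # one aligned English-Xhosa document pair
```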