---
license: cc-by-sa-4.0
dataset_info:
- config_name: de-enGB
features:
- name: source
dtype: large_string
- name: translation_A
dtype: large_string
- name: translation_B
dtype: large_string
- name: A
dtype: bool
- name: equal
dtype: bool
- name: B
dtype: bool
- name: label_A
dtype: large_string
- name: label_B
dtype: large_string
- name: text
dtype: large_string
- name: text_type
dtype: large_string
splits:
- name: train
num_bytes: 99615
num_examples: 390
download_size: 51640
dataset_size: 99615
- config_name: de-frCH
features:
- name: source
dtype: large_string
- name: translation_A
dtype: large_string
- name: translation_B
dtype: large_string
- name: A
dtype: bool
- name: equal
dtype: bool
- name: B
dtype: bool
- name: label_A
dtype: large_string
- name: label_B
dtype: large_string
- name: text
dtype: large_string
- name: text_type
dtype: large_string
splits:
- name: train
num_bytes: 106345
num_examples: 385
download_size: 55015
dataset_size: 106345
- config_name: de-itCH
features:
- name: source
dtype: large_string
- name: translation_A
dtype: large_string
- name: translation_B
dtype: large_string
- name: A
dtype: bool
- name: equal
dtype: bool
- name: B
dtype: bool
- name: label_A
dtype: large_string
- name: label_B
dtype: large_string
- name: text
dtype: large_string
- name: text_type
dtype: large_string
splits:
- name: train
num_bytes: 102833
num_examples: 378
download_size: 54128
dataset_size: 102833
- config_name: en-deCH
features:
- name: source
dtype: large_string
- name: translation_A
dtype: large_string
- name: translation_B
dtype: large_string
- name: A
dtype: bool
- name: equal
dtype: bool
- name: B
dtype: bool
- name: label_A
dtype: large_string
- name: label_B
dtype: large_string
- name: text
dtype: large_string
- name: text_type
dtype: large_string
splits:
- name: train
num_bytes: 99510
num_examples: 330
download_size: 49779
dataset_size: 99510
configs:
- config_name: de-enGB
data_files:
- split: train
path: de-enGB/train-*
- config_name: de-frCH
data_files:
- split: train
path: de-frCH/train-*
- config_name: de-itCH
data_files:
- split: train
path: de-itCH/train-*
- config_name: en-deCH
data_files:
- split: train
path: en-deCH/train-*
task_categories:
- translation
language:
- en
- de
- it
- fr
tags:
- Supertext
- DeepL
- Translation
- A/B-test
pretty_name: A/B Test Supertext vs DeepL
size_categories:
- 1K<n<10K
---
# A/B Test Supertext vs DeepL
We release all evaluation data and scripts to support further analysis and reproduction of the results in the accompanying paper: [A comparison of translation performance between DeepL and Supertext](https://arxiv.org/abs/2502.02577).
The data consists of document-level translations by Supertext and DeepL as well as accompanying ratings by professional translators. Please find more details in the paper.
Please note that the empty lines correspond to paragraph boundaries (i.e., double line breaks) in the original documents.
```python
from datasets import load_dataset

# for each language pair, there is a separate subset
data = load_dataset("Supertext/mt-doclevel-ab-test", "en-deCH")
```
## Dataset Details
As strong machine translation (MT) systems are increasingly based on large language models (LLMs), reliable quality benchmarking requires methods that capture their ability to leverage extended context. This study compares two commercial MT systems -- DeepL and Supertext -- by assessing their performance on unsegmented texts. We evaluate translation quality across four language directions with professional translators assessing segments with full document-level context. While segment-level assessments indicate no strong preference between the systems in most cases, document-level analysis reveals a preference for Supertext in three out of four language directions, suggesting superior consistency across longer texts. We advocate for more context-sensitive evaluation methodologies to ensure that MT quality assessments reflect real-world usability.
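The boolean fields `A`, `equal`, and `B` in each subset presumably mark which system's translation the rater preferred (or a tie). A minimal sketch of aggregating such segment-level preferences into counts, using made-up illustrative rows rather than the real data:

```python
# Illustrative sketch only: the rows below are fabricated examples with the
# same boolean preference fields as the dataset (A / equal / B).
# Load the actual data with load_dataset as shown above.
rows = [
    {"A": True, "equal": False, "B": False},
    {"A": False, "equal": True, "B": False},
    {"A": False, "equal": False, "B": True},
    {"A": True, "equal": False, "B": False},
]

# Count how often each outcome was selected across segments.
counts = {
    "A": sum(r["A"] for r in rows),
    "equal": sum(r["equal"] for r in rows),
    "B": sum(r["B"] for r in rows),
}
print(counts)  # {'A': 2, 'equal': 1, 'B': 1}
```

The same aggregation applied to a full subset yields the segment-level preference distribution for that language direction; see the paper for the analysis actually used.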
### Citation
If you use any of the data released in our work, please cite the following paper:
```
@misc{flückiger2025comparisontranslationperformancedeepl,
title={A comparison of translation performance between DeepL and Supertext},
author={Alex Flückiger and Chantal Amrhein and Tim Graf and Frédéric Odermatt and Martin Pömsl and Philippe Schläpfer and Florian Schottmann and Samuel Läubli},
year={2025},
eprint={2502.02577},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.02577},
}
```
### Dataset Description
- **Curated by:** Supertext
- **Language(s) (NLP):** English, French, German, Italian
- **License:** CC BY-SA 4.0
### Dataset Sources
- **Repository:** https://github.com/Supertext/evaluation_deepl_supertext
- **Paper:** https://arxiv.org/abs/2502.02577