aflueckiger committed
Commit 36e0e2c · verified · 1 Parent(s): 95a16ea

Update README.md

Files changed (1): README.md (+6 -1)
README.md CHANGED
@@ -146,11 +146,16 @@ pretty_name: A/B Test Supertext vs DeepL
  size_categories:
  - 1K<n<10K
  ---
- # Dataset Card for Dataset Name
+ # A/B Test Supertext vs DeepL
 
  We release all evaluation data and scripts for further analysis and reproduction of the accompanying paper: [A comparison of translation performance between DeepL and Supertext](https://arxiv.org/abs/2502.02577).
  The data consists of document-level translations by Supertext and DeepL as well as accompanying ratings by professional translators. Please find more details in the paper.
 
+ ``` python
+ # for each language pair, there is a separate subset
+ data = load_dataset("Supertext/mt-doclevel-ab-test", "en-deCH")
+ ```
+
  ## Dataset Details
 
  As strong machine translation (MT) systems are increasingly based on large language models (LLMs), reliable quality benchmarking requires methods that capture their ability to leverage extended context. This study compares two commercial MT systems -- DeepL and Supertext -- by assessing their performance on unsegmented texts. We evaluate translation quality across four language directions with professional translators assessing segments with full document-level context. While segment-level assessments indicate no strong preference between the systems in most cases, document-level analysis reveals a preference for Supertext in three out of four language directions, suggesting superior consistency across longer texts. We advocate for more context-sensitive evaluation methodologies to ensure that MT quality assessments reflect real-world usability.
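The abstract distinguishes segment-level ratings from document-level preferences. A minimal sketch of one way segment ratings could be aggregated into a per-document preference by majority vote; the `ratings` dict, its keys, and the label strings are hypothetical illustrations, not the dataset's actual schema or the paper's exact aggregation method:

``` python
from collections import Counter

# Hypothetical segment-level ratings for two documents; each entry records
# which system the rater preferred for that segment. These field names and
# values are illustrative only.
ratings = {
    "doc1": ["supertext", "supertext", "tie", "deepl"],
    "doc2": ["deepl", "deepl", "supertext", "deepl"],
}

def document_preference(segment_ratings):
    """Majority vote over segments, ignoring ties; 'tie' if no majority."""
    counts = Counter(r for r in segment_ratings if r != "tie")
    if not counts:
        return "tie"
    (top, n), *rest = counts.most_common(2)
    if rest and rest[0][1] == n:
        return "tie"
    return top

prefs = {doc: document_preference(r) for doc, r in ratings.items()}
```

A real analysis would read the rating fields from the loaded subset instead of the toy `ratings` dict above.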