Tadesse committed
Commit eeba1e6 · verified · 1 Parent(s): 96d1e7e

Update README.md

Files changed (1):
  1. README.md +0 -92
README.md CHANGED
@@ -114,95 +114,3 @@ size_categories:
  - 10K<n<100K
  license: mit
  ---
# Multilingual COMET Translation Quality Scores

This dataset contains COMET (Crosslingual Optimized Metric for Evaluation of Translation) quality scores for English-to-multilingual translations of the GSM8K dataset.

## Dataset Description

The dataset includes translation quality scores for 17 language pairs, all with English as the source language:
- **English → Amharic** (አማርኛ)
- **English → Ewe** (Eʋegbe)
- **English → French** (Français)
- **English → Hausa** (Harshen Hausa)
- **English → Igbo** (Asụsụ Igbo)
- **English → Kinyarwanda** (Ikinyarwanda)
- **English → Lingala** (Lingála)
- **English → Luganda** (Luganda)
- **English → Oromo** (Afaan Oromoo)
- **English → Shona** (chiShona)
- **English → Southern Sotho** (Sesotho)
- **English → Swahili** (Kiswahili)
- **English → Twi** (Twi)
- **English → Wolof** (Wolof)
- **English → Xhosa** (isiXhosa)
- **English → Yoruba** (Yorùbá)
- **English → Zulu** (isiZulu)

## Model Used

The scores were generated using the **McGill-NLP/ssa-comet-qe** model, which is specifically designed for quality estimation of translations involving Sub-Saharan African languages.
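As a non-authoritative sketch, scoring with a COMET quality-estimation checkpoint via the `unbabel-comet` library could look like the following. Only the input pairing is shown runnably; the download/predict step (reusing the model ID from this card, which may or may not load through this path) is left commented out, and the example sentences are made up:

```python
# Sketch: preparing reference-free (QE) inputs for a COMET model.
# The example source/translation pair below is illustrative, not from the dataset.

def make_qe_inputs(sources, translations):
    """Pair source and machine-translated segments into the
    {"src": ..., "mt": ...} records that COMET-QE models consume."""
    return [{"src": s, "mt": t} for s, t in zip(sources, translations)]

inputs = make_qe_inputs(
    ["Natalia sold clips to 48 of her friends."],
    ["Natalia aliuza vipini kwa marafiki zake 48."],  # Swahili (illustrative)
)

# The scoring step itself requires downloading the checkpoint, so it is
# only sketched here (assumption: the model loads via unbabel-comet):
# from comet import download_model, load_from_checkpoint
# model = load_from_checkpoint(download_model("McGill-NLP/ssa-comet-qe"))
# scores = model.predict(inputs, batch_size=8, gpus=0).scores
```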

## Data Format

Each CSV file contains three columns:

- `english`: Original English text
- `{target_language}`: Translated text in the target language
- `score`: COMET quality score (higher scores indicate better translation quality)
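A minimal sketch of that layout, using made-up rows (the texts, the scores, and the `swahili` column name are illustrative, not taken from the dataset files):

```python
import csv
import io

# Illustrative rows in the three-column layout described above
# (texts and scores are invented for the example).
csv_text = """english,swahili,score
Janet has 3 apples.,Janet ana matufaha 3.,0.81
He ran 5 km.,Alikimbia km 5.,0.74
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
columns = list(rows[0].keys())                                  # column names
mean_score = sum(float(r["score"]) for r in rows) / len(rows)   # average quality
```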

## Usage

```python
from datasets import load_dataset

# Load a specific language pair
dataset = load_dataset("Tadesse/COMET_score", "english_swahili")

# Load all language pairs
configs = ["english_amharic", "english_ewe", "english_french", "english_hausa",
           "english_igbo", "english_kinyarwanda", "english_lingala", "english_luganda",
           "english_oromo", "english_shona", "english_sotho", "english_swahili",
           "english_twi", "english_wolof", "english_xhosa", "english_yoruba", "english_zulu"]

all_datasets = {}
for config in configs:
    all_datasets[config] = load_dataset("Tadesse/COMET_score", config)
```
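Once loaded, rows can be filtered on the `score` column. A minimal sketch with plain dicts standing in for dataset rows (the 0.6 threshold and the example rows are assumptions, not something this dataset prescribes):

```python
# Sketch: keeping only higher-confidence translations.
# The rows and the 0.6 cutoff are illustrative choices.
rows = [
    {"english": "She bought 4 pens.", "swahili": "Alinunua kalamu 4.", "score": 0.83},
    {"english": "They left early.", "swahili": "Waliondoka mapema.", "score": 0.41},
]

high_quality = [r for r in rows if r["score"] >= 0.6]

# With a loaded datasets.Dataset split, the equivalent filter would be:
# high_quality = dataset["train"].filter(lambda r: r["score"] >= 0.6)
```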

## Score Statistics

COMET scores typically range from 0 to 1, where:

- **0.8-1.0**: Excellent translation quality
- **0.6-0.8**: Good translation quality
- **0.4-0.6**: Moderate translation quality
- **0.2-0.4**: Poor translation quality
- **0.0-0.2**: Very poor translation quality
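The bands above can be expressed as a small helper; the half-open treatment of the band edges here is an editorial choice, since the card does not specify how boundary values are assigned:

```python
def quality_band(score: float) -> str:
    """Map a COMET score to the rough quality bands listed above.

    Band edges follow this card's table; a score exactly on an edge
    falls into the higher band (an implementation choice).
    """
    bands = [
        (0.8, "excellent"),
        (0.6, "good"),
        (0.4, "moderate"),
        (0.2, "poor"),
    ]
    for lower, label in bands:
        if score >= lower:
            return label
    return "very poor"
```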

## Source Data

The translations are based on the GSM8K dataset (`israel/translated_gsm8k`), which contains grade school math word problems translated into multiple languages.

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{multilingual_comet_scores,
  title={Multilingual COMET Translation Quality Scores},
  author={[Your Name]},
  year={2025},
  url={https://huggingface.co/datasets/Tadesse/COMET_score}
}
```

## License

This dataset is released under the MIT License.

## Acknowledgments

- Original GSM8K dataset creators
- McGill NLP Lab for the COMET model
- Hugging Face for hosting the translated GSM8K dataset
 