---
license: cc-by-4.0
language:
  - vmw
  - nya
  - ts
  - seh
  - pt
---

# MOZNLP-UP Direct Assessment and Evaluation Dataset

## Description

This dataset contains direct assessment data collected and curated during machine translation dataset creation and model evaluation for Mozambican languages. The data is intended for evaluating the quality and accuracy of translation and speech models.

The dataset is also part of SSA-COMET (McGill-NLP), a framework for automatic machine translation evaluation for African languages, which lets researchers benchmark and fine-tune evaluation models for the language pairs covered here (see the scoring sketch after the list):

- Emakhuwa (pt-vmw)
- Xichangana (pt-ts) - Mozambican variant
- Nyanja (pt-nya) - Mozambican variant
- Sena (pt-seh) - Mozambican variant
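
SSA-COMET metrics are built on the Unbabel `comet` library, so segment-level scoring of this dataset can look roughly like the sketch below. The checkpoint id is an assumption, not a confirmed identifier; check the SSA-COMET release on the Hugging Face Hub for the exact model name.

```python
# Sketch of COMET-style scoring, assuming an SSA-COMET checkpoint is
# published on the Hugging Face Hub. The model id below is HYPOTHETICAL;
# replace it with the actual SSA-COMET checkpoint name.
from comet import download_model, load_from_checkpoint

model_path = download_model("McGill-NLP/ssa-comet")  # hypothetical id
model = load_from_checkpoint(model_path)

# Each item pairs a source segment with a system output and a reference.
data = [
    {
        "src": "Bom dia, como está?",          # Portuguese source
        "mt": "system translation to score",   # MT output under evaluation
        "ref": "professional reference",       # human reference translation
    }
]

# predict() returns per-segment scores plus a corpus-level system score.
output = model.predict(data, batch_size=8, gpus=0)
print(output.scores, output.system_score)
```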

## Tasks

- Translation quality assessment

## License

CC BY 4.0

## Dataset Columns

- `imported_text_source_id`: Unique identifier for the original source text segment.
- `src_text`: Source text in the original language.
- `src_lang`: Language code of the source text.
- `tgt_lang`: Language code of the target translation.
- `ref_text`: Reference translation produced by professional translators.
- `mt_text`: Machine-translated output being evaluated.
- `score_annotator1`: Human-assigned adequacy quality score from annotator 1 (0-100).
- `adequacy_issues_annotator1`: Notes from annotator 1 describing translation adequacy issues.
- `score_annotator3`: Human-assigned adequacy quality score from annotator 3 (0-100).
- `adequacy_issues_annotator3`: Notes from annotator 3 describing translation adequacy issues.
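
As a quick illustration of the schema above, the sketch below loads the data with the `datasets` library, averages the two annotators' adequacy scores, and measures their agreement. The repo id and split name are assumptions; adjust them to the actual Hub path, and cast the score columns to float if they are stored as strings.

```python
# Sketch of loading and inspecting the dataset. The repo id and split
# name are ASSUMPTIONS; adjust them to the actual Hugging Face Hub path.
from datasets import load_dataset
from scipy.stats import spearmanr

ds = load_dataset("felerminoali/moznlp-MT-QE", split="train")

# Peek at one segment: source, MT output, and one annotator's score.
row = ds[0]
print(row["src_text"], row["mt_text"], row["score_annotator1"])

# Average the two annotators' adequacy scores per segment.
s1 = [float(x) for x in ds["score_annotator1"]]
s3 = [float(x) for x in ds["score_annotator3"]]
avg_scores = [(a + b) / 2 for a, b in zip(s1, s3)]

# Inter-annotator agreement (Spearman), the same correlation statistic
# the related papers use to compare metrics against human judgments.
rho, _ = spearmanr(s1, s3)
print(f"Inter-annotator Spearman: {rho:.3f}")
```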

## Citation

```bibtex
@inproceedings{li-etal-2025-evaluating-wmt,
    title = "Evaluating {WMT} 2025 Metrics Shared Task Submissions on the {SSA}-{MTE} {A}frican Challenge Set",
    author = "Li, Senyu  and
      Ali, Felermino Dario Mario  and
      Wang, Jiayi  and
      Sousa-Silva, Rui  and
      Lopes Cardoso, Henrique  and
      Stenetorp, Pontus  and
      Cherry, Colin  and
      Adelani, David Ifeoluwa",
    editor = "Haddow, Barry  and
      Kocmi, Tom  and
      Koehn, Philipp  and
      Monz, Christof",
    booktitle = "Proceedings of the Tenth Conference on Machine Translation",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.wmt-1.65/",
    doi = "10.18653/v1/2025.wmt-1.65",
    pages = "913--919",
    ISBN = "979-8-89176-341-8",
    abstract = "This paper presents the evaluation of submissions to the WMT 2025 Metrics Shared Task on the SSA-MTE challenge set, a large-scale benchmark for machine translation evaluation (MTE) in Sub-Saharan African languages. The SSA-MTE test sets contains over 12,768 human-annotated adequacy scores across 11 language pairs sourced from English, French, and Portuguese, spanning 6 commercial and open-source MT systems. Results show that correlations with human judgments remain generally low, with most systems falling below the 0.4 Spearman threshold for medium-level agreement. Performance varies widely across language pairs, with most correlations under 0.4; in some extremely low-resource cases, such as Portuguese{--}Emakhuwa, correlations drop to around 0.1, underscoring the difficulty of evaluating MT for very low-resource African languages. These findings highlight the urgent need for more research on robust, generalizable MT evaluation methods tailored for African languages."
}
```

```bibtex
@inproceedings{li-etal-2025-ssa,
    title = "{SSA}-{COMET}: Do {LLM}s Outperform Learned Metrics in Evaluating {MT} for Under-Resourced {A}frican Languages?",
    author = "Li, Senyu  and
      Wang, Jiayi  and
      Ali, Felermino D. M. A.  and
      Cherry, Colin  and
      Deutsch, Daniel  and
      Briakou, Eleftheria  and
      Sousa-Silva, Rui  and
      Lopes Cardoso, Henrique  and
      Stenetorp, Pontus  and
      Adelani, David Ifeoluwa",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.656/",
    doi = "10.18653/v1/2025.emnlp-main.656",
    pages = "12990--13009",
    ISBN = "979-8-89176-332-6",
    abstract = "Evaluating machine translation (MT) quality for under-resourced African languages remains a significant challenge, as existing metrics often suffer from limited language coverage and poor performance in low-resource settings. While recent efforts, such as AfriCOMET, have addressed some of the issues, they are still constrained by small evaluation sets, a lack of publicly available training data tailored to African languages, and inconsistent performance in extremely low-resource scenarios. In this work, we introduce SSA-MTE, a large-scale human-annotated MT evaluation (MTE) dataset covering 13 African language pairs from the News domain, with over 63,000 sentence-level annotations from a diverse set of MT systems. Based on this data, we develop SSA-COMET and SSA-COMET-QE, improved reference-based and reference-free evaluation metrics. We also benchmark prompting-based approaches using state-of-the-art LLMs like GPT-4o and Claude. Our experimental results show that SSA-COMET models significantly outperform AfriCOMET and are competitive with the strongest LLM (Gemini 2.5 Pro) evaluated in our study, particularly on low-resource languages such as Twi, Luo, and Yoruba. All resources are released under open licenses to support future research."
}
```