---
license: cc-by-4.0
language:
  - ar
  - ca
  - cy
  - de
  - en
  - et
  - fa
  - id
  - ja
  - lv
  - sl
  - sv
  - ta
  - tr
  - zh
multilinguality:
  - multilingual
task_categories:
  - automatic-speech-recognition
  - translation
  - text-generation
tags:
  - speech-to-text
  - generative-error-correction
  - n-best-list
  - covost2
  - common-voice
---

# CoVoGER: A Multilingual Multitask Benchmark for Speech-to-text Generative Error Correction with Large Language Models

## Dataset Description

Large language models (LLMs) can rewrite the N-best hypotheses from a speech-to-text model, often fixing recognition or translation errors that traditional rescoring cannot. Yet research on generative error correction (GER) has focused on monolingual automatic speech recognition (ASR), leaving its multilingual and multitask potential underexplored.

We introduce CoVoGER, a benchmark for GER that covers both ASR and speech-to-text translation (ST) across 15 languages and 28 language pairs. CoVoGER is constructed by decoding Common Voice 20.0 and CoVoST-2 with Whisper of three model sizes and SeamlessM4T of two model sizes, providing 5-best lists obtained via a mixture of beam search and temperature sampling.
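The beam-search/temperature-sampling mixture described above can be sketched as merging two scored hypothesis lists into a deduplicated 5-best list. The hypotheses, scores, and the `build_n_best` helper below are illustrative placeholders, not the actual CoVoGER decoding pipeline:

```python
def build_n_best(beam_hyps, sampled_hyps, n=5):
    """Merge two (text, log_prob) hypothesis lists into a deduplicated n-best list."""
    seen, merged = set(), []
    # Pool both decoding strategies and rank by log-probability (higher is better).
    for text, score in sorted(beam_hyps + sampled_hyps, key=lambda h: -h[1]):
        if text not in seen:
            seen.add(text)
            merged.append(text)
        if len(merged) == n:
            break
    return merged


# Toy scores: beam search tends to give fewer, higher-probability candidates,
# while temperature sampling adds diversity.
beam = [("the cat sat on the mat", -1.2), ("the cat sat on a mat", -1.8)]
sampled = [("the cat sat on the mat", -1.5), ("a cat sat on the mat", -2.1),
           ("the cat sat on that mat", -2.4), ("the cats sat on the mat", -2.9)]
print(build_n_best(beam, sampled))
```

Duplicates across the two strategies are collapsed so the 5-best list stays diverse.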

## Usage and Data Commands

You can download and load the dataset with the Hugging Face `datasets` library in Python.

### Terminal/CLI Commands

To clone the repository locally via Git, run:

```bash
# Ensure you have git-lfs installed
git lfs install
git clone https://huggingface.co/datasets/PeacefulData/CoVoGER
```

## Dataset Structure

CoVoGER provides 5-best lists generated by standard ASR and ST models (Whisper and SeamlessM4T). The dataset supports ASR and ST tasks across 15 languages and 28 language pairs.

(The column names below are indicative; consult the uploaded Parquet/JSONL files for the exact schema.)

- `audio_id`: Identifier for the original audio file.
- `source_language`: Language of the spoken audio.
- `target_language`: Target language for translation (same as the source for ASR).
- `n_best_hypotheses`: A list of the 5-best transcriptions/translations generated by the base models.
- `reference`: The ground-truth text.
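For GER, each example is typically flattened into a prompt that asks an LLM to rewrite the N-best list. The sketch below assumes the indicative field names above; the `make_ger_prompt` template is illustrative, not the prompt used in the paper:

```python
def make_ger_prompt(example):
    """Turn one CoVoGER-style example into a simple error-correction prompt."""
    hyps = "\n".join(
        f"{i + 1}. {h}" for i, h in enumerate(example["n_best_hypotheses"])
    )
    return (
        f"Below are {len(example['n_best_hypotheses'])} candidate "
        f"{example['target_language']} outputs for one utterance.\n"
        f"{hyps}\n"
        "Output the single corrected transcription/translation."
    )


# Toy example following the indicative schema above.
example = {
    "audio_id": "common_voice_en_000001",
    "source_language": "en",
    "target_language": "en",
    "n_best_hypotheses": ["hello word", "hello world", "hallo world",
                          "hello whirled", "hell o world"],
    "reference": "hello world",
}
print(make_ger_prompt(example))
```

The resulting string can be sent to any instruction-tuned LLM, and its output compared against `reference`.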

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("PeacefulData/CoVoGER")

# Print the first sample of the train split
print(dataset["train"][0])
```
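Corrected ASR outputs are conventionally scored with word error rate (WER) against the reference. A minimal from-scratch WER, included here as a standard metric sketch rather than the paper's evaluation code:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # One-row dynamic programming over the edit-distance table.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev = d[0]          # carries D[i-1][j-1]
        d[0] = i
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[j] = min(d[j] + 1,       # deletion
                       d[j - 1] + 1,   # insertion
                       prev + cost)    # substitution / match
            prev = cur
    return d[-1] / max(len(ref), 1)


print(wer("the cat sat on the mat", "the cat sat on a mat"))
```

For ST outputs, BLEU or similar translation metrics would be used instead.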

## References

If you use CoVoGER in your research, or it is relevant to your work, please consider citing our EMNLP 2025 paper. Thank you!

```bibtex
@inproceedings{yang-etal-2025-covoger,
    title = "{C}o{V}o{GER}: A Multilingual Multitask Benchmark for Speech-to-text Generative Error Correction with Large Language Models",
    author = "Yang, Zhengdong  and
      Wan, Zhen  and
      Li, Sheng  and
      Yang, Chao-Han Huck  and
      Chu, Chenhui",
    editor = "Christodoulopoulos, Christos  and
      Chakraborty, Tanmoy  and
      Rose, Carolyn  and
      Peng, Violet",
    booktitle = "Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.emnlp-main.320/",
    doi = "10.18653/v1/2025.emnlp-main.320",
    pages = "6302--6314",
    ISBN = "979-8-89176-332-6",
    abstract = "Large language models (LLMs) can rewrite the N-best hypotheses from a speech-to-text model, often fixing recognition or translation errors that traditional rescoring cannot. Yet research on generative error correction (GER) has been focusing on monolingual automatic speech recognition (ASR), leaving its multilingual and multitask potential underexplored. We introduce CoVoGER, a benchmark for GER that covers both ASR and speech-to-text translation (ST) across 15 languages and 28 language pairs. CoVoGER is constructed by decoding Common Voice 20.0 and CoVoST-2 with Whisper of three model sizes and SeamlessM4T of two model sizes, providing 5-best lists obtained via a mixture of beam search and temperature sampling. We evaluated various instruction-tuned LLMs, including commercial models in zero-shot mode and open-sourced models with LoRA fine-tuning, and found that the mixture decoding strategy yields the best GER performance in most settings. CoVoGER will be released to promote research on reliable language-universal speech-to-text GER. The code and data for the benchmark are available at https://github.com/N-Orien/CoVoGER."
}
```