Improve dataset card for Long-TTS-Eval (MGM-Omni Benchmark)

#2 opened by nielsr (HF Staff)

This PR significantly improves the dataset card for the Long-TTS-Eval benchmark.

Key changes include:

  • Corrected `pretty_name`: Set `pretty_name` to Long-TTS-Eval to match the dataset's name as linked in the original GitHub README.
  • Enriched metadata: Added `automatic-speech-recognition` and `text-to-speech` to `task_categories`, plus the tags `multimodal`, `speech-generation`, `speech-understanding`, `voice-cloning`, `benchmark`, and `long-form` for better discoverability.
  • Comprehensive Description: Included an introduction to the dataset, its relation to the MGM-Omni paper, and a detailed explanation of its structure, features, and splits.
  • Provided Key Links: Added direct links to the paper, the GitHub repository, and the Hugging Face Space demo.
  • Included Paper Abstract: The full abstract of the associated paper is now part of the card, offering immediate context.
  • Highlighted Main Properties: Incorporated the main properties of the MGM-Omni system, showcasing the capabilities that this dataset helps evaluate.
  • Added Sample Usage: Provided practical code snippets for "Zero-Shot Voice Cloning" and "Chat as an Omni chatbot (Text Input)", taken directly from the GitHub README, to guide users on interacting with the associated models.
  • Presented Evaluation Results: Included the relevant evaluation tables from the GitHub README, illustrating the dataset's role as a benchmark.
  • Complete Citation: Added all BibTeX entries as provided in the original GitHub repository.
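Taken together, the metadata changes above would look roughly like the following in the card's YAML front matter (a sketch based only on the fields named in this PR description; the actual card may order fields differently or include additional ones):

```yaml
# Sketch of the updated dataset card front matter (fields from this PR only)
pretty_name: Long-TTS-Eval
task_categories:
  - automatic-speech-recognition
  - text-to-speech
tags:
  - multimodal
  - speech-generation
  - speech-understanding
  - voice-cloning
  - benchmark
  - long-form
```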

These updates make the card a richer, more accurate, and more informative resource for users interested in the Long-TTS-Eval dataset and the MGM-Omni project.

wcy1122 changed pull request status to merged
