---
language:
  - en
  - zh
license: apache-2.0
tags:
  - music
task_categories:
  - automatic-speech-recognition
---

# Song Structure and Lyric Dataset (SSLD-200)

This dataset is used as an evaluation benchmark in the paper *LeVo: High-Quality Song Generation with Multi-Preference Alignment*.

- Project page: https://levo-demo.github.io
- Code: https://github.com/tencent-ailab/songgeneration

SSLD-200 is used to evaluate song structure parsing and lyrics transcription. It consists of 200 songs, 100 English and 100 Chinese, collected entirely from YouTube, with a total duration of 13.9 hours.

Each entry's `lyric_norm` field follows the format `[structure][start:end]lyric`:

- `structure` is the segment's label from StructureAnalysis.
- `start` and `end` are the segment's start and end times.
- `lyric` is the recognized lyrics.
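A segment line in this format can be split into its fields with a regular expression. This is a minimal sketch; the example line below (and its timestamps, assumed to be in seconds) is illustrative, not taken from the dataset:

```python
import re

# Hypothetical example line in the [structure][start:end]lyric format.
line = "[verse][12.3:18.7]these faded memories of us"

# Capture the structure label, segment start/end times, and lyric text.
pattern = re.compile(
    r"\[(?P<structure>[^\]]+)\]\[(?P<start>[\d.]+):(?P<end>[\d.]+)\](?P<lyric>.*)"
)
m = pattern.match(line)
segment = {
    "structure": m.group("structure"),
    "start": float(m.group("start")),
    "end": float(m.group("end")),
    "lyric": m.group("lyric").strip(),
}
print(segment)
# {'structure': 'verse', 'start': 12.3, 'end': 18.7, 'lyric': 'these faded memories of us'}
```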

## Sample Usage

To use this dataset in conjunction with the associated SongGeneration models, follow these steps for inference.

First, download the required runtime components and a specific model checkpoint:

```sh
# Download runtime components
huggingface-cli download lglg666/SongGeneration-Runtime --local-dir ./runtime
mv runtime/ckpt ckpt
mv runtime/third_party third_party

# Download a specific model checkpoint (e.g., SongGeneration-base-new)
huggingface-cli download lglg666/SongGeneration-base-new --local-dir ./songgeneration_base_new
```

Once set up, you can run the inference script. You need to provide a `ckpt_path` (the directory where you downloaded the model checkpoint), an input `lyrics.jsonl` file, and an `output_path`.

```sh
sh generate.sh ckpt_path lyrics.jsonl output_path
```

Input format (`lyrics.jsonl`):

Each line in the `.jsonl` file represents an individual song-generation request and contains the following fields:

- `idx`: A unique identifier for the output song.
- `gt_lyric`: The lyrics and song structure, following the format `[Structure] Text`. For example: `[intro-short] ; [verse] These faded memories of us. I can't erase the tears you cried before. Unchained this heart to find its way. My peace won't beg you to stay ; [bridge] If ever your truth still remains. Turn around and see. Life rearranged its games. All these lessons in mistakes. Even years may never erase ; [inst-short] ; [chorus] Like a fool begs for supper. I find myself waiting for her. Only to find the broken pieces of my heart. That was needed for my soul to love again ; [outro-short]`
- `descriptions`: (Optional) Custom text prompt for generation attributes such as gender, timbre, genre, emotion, instrument, and BPM (e.g., "female, dark, pop, sad, piano and drums.").
- `prompt_audio_path`: (Optional) Path to a 10-second reference audio file.
- `auto_prompt_audio_type`: (Optional) Used when `prompt_audio_path` is not provided; automatically selects a reference audio of a given style (e.g., 'Pop', 'R&B', 'Dance', 'Jazz').
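A request line can be assembled and appended to `lyrics.jsonl` as in this minimal sketch; the field names follow the description above, while the `idx`, lyric text, and description values are illustrative:

```python
import json

# One song-generation request (values are illustrative; field names follow
# the lyrics.jsonl input-format description).
request = {
    "idx": "demo_song_001",
    "gt_lyric": "[intro-short] ; [verse] These faded memories of us. "
                "I can't erase the tears you cried before ; [outro-short]",
    "descriptions": "female, dark, pop, sad, piano and drums.",
}

# Each request occupies exactly one line of the .jsonl file.
with open("lyrics.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(request, ensure_ascii=False) + "\n")
```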

Example command:

```sh
sh generate.sh songgeneration_base_new sample/lyrics.jsonl sample/output
```

Additional flags support specific inference scenarios, such as `--low_mem` for low-memory inference, `--not_use_flash_attn` to disable Flash Attention, or `--separate` to generate separated vocal and accompaniment tracks.

## Citation

The dataset itself is detailed in the following work:

```bibtex
@misc{tan2025songpreppreprocessingframeworkendtoend,
  title={SongPrep: A Preprocessing Framework and End-to-end Model for Full-song Structure Parsing and Lyrics Transcription},
  author={Wei Tan and Shun Lei and Huaicheng Zhang and Guangzheng Li and Yixuan Zhang and Hangting Chen and Jianwei Yu and Rongzhi Gu and Dong Yu},
  year={2025},
  eprint={2509.17404},
  archivePrefix={arXiv},
  primaryClass={eess.AS},
  url={https://arxiv.org/abs/2509.17404},
}
```