---
dataset_info:
  features:
    - name: segment_id
      dtype: string
    - name: transcription
      dtype: string
    - name: label
      dtype: string
    - name: tempo
      dtype: int64
    - name: note_midi
      sequence: float64
    - name: note_phns
      sequence: string
    - name: note_lyrics
      sequence: string
    - name: note_start_times
      sequence: float64
    - name: note_end_times
      sequence: float64
    - name: phns
      sequence: string
    - name: phn_start_times
      sequence: float64
    - name: phn_end_times
      sequence: float64
    - name: note_midi_length
      dtype: int64
    - name: lyric_word_length
      dtype: int64
  splits:
    - name: train
      num_bytes: 1092803
      num_examples: 833
  download_size: 347719
  dataset_size: 1092803
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-to-audio
license: cc-by-nc-nd-4.0
---

# SingingSDS Dataset

This repository contains the dataset for SingingSDS: A Singing-Capable Spoken Dialogue System for Conversational Roleplay Applications.

SingingSDS is a role-playing singing dialogue system that converts natural speech input into character-based singing output. It integrates automatic speech recognition (ASR), large language models (LLMs), and singing voice synthesis (SVS) to create immersive conversational singing experiences. This dataset provides structured annotations (segment ID, transcription, labels, tempo, MIDI notes, phonemes, lyrics, and their timing information) that are essential for training and evaluating the SVS components of the SingingSDS system.
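
As a minimal sketch of how the annotation fields fit together, the snippet below builds a single record following this schema and derives per-note durations from the aligned timing fields. All values are illustrative, not drawn from the actual dataset:

```python
# One illustrative record in the dataset's schema (values are made up,
# not taken from the real dataset).
record = {
    "segment_id": "seg_0001",
    "transcription": "hello world",
    "label": "sing",
    "tempo": 120,
    "note_midi": [60.0, 62.0, 64.0],       # one MIDI pitch per note
    "note_lyrics": ["hel", "lo", "world"],  # aligned lyric per note
    "note_start_times": [0.0, 0.5, 1.0],    # seconds
    "note_end_times": [0.5, 1.0, 2.0],      # seconds
    "note_midi_length": 3,
}

# Per-note durations in seconds, derived from the aligned start/end times.
durations = [
    end - start
    for start, end in zip(record["note_start_times"], record["note_end_times"])
]

# The *_length fields are consistency counts for the parallel sequences.
assert len(durations) == record["note_midi_length"]
print(durations)  # -> [0.5, 0.5, 1.0]
```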

## Sample Usage (SingingSDS System)

The following examples demonstrate how to use the SingingSDS system via its command-line interface (CLI), showing how models trained on datasets like this one can be applied at inference time.

### Example Usage

```bash
python cli.py \
  --query_audio tests/audio/hello.wav \
  --config_path config/cli/yaoyin_default.yaml \
  --output_audio outputs/yaoyin_hello.wav \
  --eval_results_csv outputs/yaoyin_test.csv
```

### Inference-Only Mode

Runs minimal inference without computing evaluation results.

```bash
python cli.py \
  --query_audio tests/audio/hello.wav \
  --config_path config/cli/yaoyin_default_infer_only.yaml \
  --output_audio outputs/yaoyin_hello.wav
```
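
The single-file CLI calls above can be scripted for batch processing. The sketch below is a hypothetical wrapper, not part of the project: the helper names, the `outputs/` directory layout, and the output filename pattern are all assumptions, while the flag names match the CLI documented here.

```python
import subprocess
from pathlib import Path

def build_command(query_audio: Path, output_dir: Path) -> list[str]:
    """Assemble an inference-only CLI invocation for one audio file.

    Hypothetical helper: the flag names match the SingingSDS CLI above;
    the output naming scheme is an assumption.
    """
    return [
        "python", "cli.py",
        "--query_audio", str(query_audio),
        "--config_path", "config/cli/yaoyin_default_infer_only.yaml",
        "--output_audio", str(output_dir / f"yaoyin_{query_audio.stem}.wav"),
    ]

def run_batch(audio_dir: Path, output_dir: Path) -> None:
    """Run inference-only mode for every .wav file in a directory."""
    output_dir.mkdir(parents=True, exist_ok=True)
    for wav in sorted(audio_dir.glob("*.wav")):
        subprocess.run(build_command(wav, output_dir), check=True)
```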

### Parameter Description

- `--query_audio`: path to the input audio file (required)
- `--config_path`: path to the configuration file (default: `config/cli/yaoyin_default.yaml`)
- `--output_audio`: path for the output audio file (required)
- `--eval_results_csv`: path for the evaluation-results CSV (optional; used in the Example Usage above)