---
language:
  - en
  - zh
license: cc-by-nc-sa-4.0
task_categories:
  - audio-classification
  - text-to-speech
tags:
  - audio
  - speech
  - emotion
  - bilingual
  - tts
  - s2s
  - expressiveness
size_categories:
  - 10K<n<100K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: 'No'
      dtype: int64
    - name: from
      dtype: string
    - name: value
      dtype: string
    - name: emotion
      dtype: string
    - name: length
      dtype: float64
    - name: score_arousal
      dtype: float64
    - name: score_prosody
      dtype: float64
    - name: score_nature
      dtype: float64
    - name: score_expressive
      dtype: float64
    - name: audio-path
      dtype: audio
  splits:
    - name: train
      num_bytes: 4728746481
      num_examples: 28190
  download_size: 12331997848
  dataset_size: 4728746481
---

# ExpressiveSpeech Dataset

Project Webpage | Paper | Code

中文版 (Chinese Version)

## Paper Abstract

Recent speech-to-speech (S2S) models generate intelligible speech but still lack natural expressiveness, largely due to the absence of a reliable evaluation metric. Existing approaches, such as subjective MOS ratings, low-level acoustic features, and emotion recognition, are costly, limited, or incomplete. To address this, we present DeEAR (Decoding the Expressive Preference of eAR), a framework that converts human preference for speech expressiveness into an objective score. Grounded in phonetics and psychology, DeEAR evaluates speech across three dimensions: Emotion, Prosody, and Spontaneity, achieving strong alignment with human perception (Spearman's Rank Correlation Coefficient, SRCC = 0.86) using fewer than 500 annotated samples. Beyond reliable scoring, DeEAR enables fair benchmarking and targeted data curation. It not only distinguishes expressiveness gaps across S2S models but also selects 14K expressive utterances to form ExpressiveSpeech, which improves the expressive score (from 2.0 to 23.4 on a 100-point scale) of S2S models. Demos and codes are available at this https URL

## About The Dataset

ExpressiveSpeech is a high-quality, expressive, and bilingual (Chinese-English) speech dataset created to address the common lack of consistent vocal expressiveness in existing dialogue datasets.

This dataset is meticulously curated from five renowned open-source emotional dialogue datasets: Expresso, NCSSD, M3ED, MultiDialog, and IEMOCAP. Through a rigorous processing and selection pipeline, ExpressiveSpeech ensures that every utterance meets high standards for both acoustic quality and expressive richness. It is designed for tasks in expressive Speech-to-Speech (S2S), Text-to-Speech (TTS), voice conversion, speech emotion recognition, and other fields requiring high-fidelity, emotionally resonant audio.

### Key Features

- **High Expressiveness**: Achieves an average DeEAR expressiveness score of 80.2, far surpassing the original source datasets.
- **Bilingual Content**: Contains a balanced mix of Chinese and English speech, with a language ratio close to 1:1.
- **Substantial Scale**: Comprises approximately 14,000 utterances, totaling 51 hours of audio.
- **Rich Metadata**: Includes ASR-generated text transcriptions, expressiveness scores, and source information for each utterance.

### Dataset Statistics

| Metric | Value |
| --- | --- |
| Total Utterances | ~14,000 |
| Total Duration | ~51 hours |
| Languages | Chinese, English |
| Language Ratio (CN:EN) | Approx. 1:1 |
| Sampling Rate | 16 kHz |
| Avg. Expressiveness Score (DeEAR) | 80.2 |

## Our Expressiveness Scoring Tool: DeEAR

The high expressiveness of this dataset was achieved with our screening tool, DeEAR. If you want to curate larger batches of highly expressive data yourself, you are welcome to use this tool; you can find it on our GitHub.

## Sample Usage

To get started with the DeEAR model for inference, follow the steps below from the GitHub repository:

### 1. Clone the Repository

```bash
git clone https://github.com/FreedomIntelligence/ExpressiveSpeech.git
cd ExpressiveSpeech
```

### 2. Setup

```bash
conda create -n DeEAR python=3.10
conda activate DeEAR
pip install -r requirements.txt
conda install -c conda-forge ffmpeg
```

### 3. Prepare

Download the DeEAR_Base model from FreedomIntelligence/DeEAR_Base and place it in the models/DeEAR_Base/ directory.
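If you prefer to fetch the model programmatically, here is a minimal sketch using the `huggingface_hub` library, assuming `FreedomIntelligence/DeEAR_Base` is hosted on the Hugging Face Hub and should land in the `models/DeEAR_Base/` directory mentioned above:

```python
# Minimal sketch: download DeEAR_Base into the directory expected by inference.py.
# Assumes the checkpoint is hosted on the Hugging Face Hub under FreedomIntelligence/DeEAR_Base.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="FreedomIntelligence/DeEAR_Base",
    local_dir="models/DeEAR_Base",
)
```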

### 4. Inference

```bash
python inference.py \
    --model_dir ./models \
    --input_path /path/to/audio_folder \
    --output_file /path/to/save/my_scores.jsonl \
    --batch_size 64
```
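The command writes its scores to the JSONL file given by `--output_file`. The exact field names depend on the repository's `inference.py`; the snippet below is a hypothetical sketch that assumes a `score_expressive` field, so check one line of your output and adjust accordingly:

```python
import json

# Hypothetical sketch: rank the scored clips by expressiveness.
# The "score_expressive" key is an assumption; use whatever keys inference.py actually writes.
with open("my_scores.jsonl", "r", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

for record in sorted(records, key=lambda r: r.get("score_expressive", 0.0), reverse=True)[:10]:
    print(record)
```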

## Data Format

The dataset is organized as follows:

```
ExpressiveSpeech/
├── audio/
│   ├── M3ED
│   │    ├── audio_00001.wav
│   │    └── ...
│   ├── NCSSD
│   ├── IEMOCAP
│   ├── MultiDialog
│   └── Expresso
└── metadata.jsonl
```
- `metadata.jsonl`: A JSONL file containing detailed information for each utterance, including:
  - `audio-path`: the relative path to the audio file.
  - `value`: the ASR-generated text transcription.
  - `emotion`: the emotion label from the original dataset.
  - `score_expressive` (plus the component scores `score_arousal`, `score_prosody`, and `score_nature`): the expressiveness scores from the DeEAR model.

### JSONL Files Example

Each JSONL line contains a `conversations` field with an array of utterances.
Example:

{"conversations": [{"No": 9, "from": "user", "value": "Yeah.", "emotion": "happy", "length": 2.027, "score_arousal": 0.9931480884552002, "score_prosody": 0.6800634264945984, "score_nature": 0.9687601923942566, "score_expressive": 0.9892677664756775, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/009_speaker1_53s_55s.wav"}, {"No": 10, "from": "assistant", "value": "What was the reason, what was the, why couldn't I get there, ah I forget.", "emotion": "happy", "length": 3.753, "score_arousal": 0.9555678963661194, "score_prosody": 0.6498672962188721, "score_nature": 1.030701756477356, "score_expressive": 0.9965837001800537, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/010_speaker2_55s_59s.wav"}]}
{"conversations": [{"No": 10, "from": "user", "value": "What was the reason, what was the, why couldn't I get there, ah I forget.", "emotion": "happy", "length": 3.753, "score_arousal": 0.9555678963661194, "score_prosody": 0.6498672962188721, "score_nature": 1.030701756477356, "score_expressive": 0.9965837001800537, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/010_speaker2_55s_59s.wav"}, {"No": 11, "from": "assistant", "value": "Because genie really had to go and and to the bathroom and she couldn't find a place to do it and so she when they put the tent on it it was it was a bad mess and they shouldn't have done that.", "emotion": "happy", "length": 10.649, "score_arousal": 0.976757287979126, "score_prosody": 0.7951533794403076, "score_nature": 0.9789049625396729, "score_expressive": 0.919080913066864, "audio-path": "audios/Expresso/splitted_conversation/ex04-ex01/laughing/ex04-ex01_laughing_001/011_speaker1_58s_69s.wav"}]}

**Note:** Some source datasets applied VAD, which could split a single utterance into multiple segments. To maintain conversational integrity, we applied rules to merge such segments back into complete utterances.
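Alternatively, the packaged train split can be loaded with the Hugging Face `datasets` library. This is a minimal sketch; the repository id below is an assumption, so replace it with the actual Hub id of this dataset:

```python
from datasets import load_dataset

# Sketch: load the train split from the Hub.
# "FreedomIntelligence/ExpressiveSpeech" is an assumed repository id; adjust if needed.
ds = load_dataset("FreedomIntelligence/ExpressiveSpeech", split="train")

sample = ds[0]
print(sample["value"], sample["emotion"], sample["score_expressive"])
print(sample["audio-path"]["sampling_rate"])  # the audio-path column decodes to an audio dict
```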

## License

In line with the non-commercial restrictions of its source datasets, the ExpressiveSpeech dataset is released under the CC BY-NC-SA 4.0 license.

You can view the full license at https://creativecommons.org/licenses/by-nc-sa/4.0/.

## Citation

If you use this dataset in your research, please cite our paper:

```bibtex
@article{lin2025decoding,
  title={Decoding the Ear: A Framework for Objectifying Expressiveness from Human Preference Through Efficient Alignment},
  author={Lin, Zhiyu and Yang, Jingwen and Zhao, Jiale and Liu, Meng and Li, Sunzhu and Wang, Benyou},
  journal={arXiv preprint arXiv:2510.20513},
  year={2025}
}
```