---
license: cc-by-4.0
task_categories:
- audio-classification
language:
- zh
size_categories:
- 1K<n<10K
---
# Chinese Speech Emotional Understanding Benchmark (CSEU-Bench)

- CSEU-Bench evaluates how well models understand psycho-linguistic emotion labels in Chinese speech. It contains Chinese speech audio spanning diverse syntactic structures, annotated with 83 psycho-linguistic emotion entities as classification labels.
- GitHub: https://github.com/qiuchili/CSEU-Bench
# CSEU-Bench Components:

- `CSEU-Bench.csv`: all speech samples
- `CSEU-monosyllabic.csv`: speech samples with single-syllable words
- `CSEU-bisyllabic.csv`: speech samples with two-syllable words
- `CSEU-short-sentence.csv`: speech samples with short sentences
- `CSEU-discourse.csv`: discourse speech samples
# Columns in data files:

- `target`: speech script (text)
- `target_audio`: path to the speech audio file
- `sample_type`: syntactic structure of the speech: `monosyllabic`, `bisyllabic`, `short-sentence`, or `discourse`
- `judgment`: 8 human judgment labels for each sample; for the full label set, see `utils/const.py` in https://github.com/qiuchili/CSEU-Bench
- `literal_sentiment`: binary value indicating whether the speech is neutral by its literal meaning; applies only to discourse samples
- `target_attitude`: gold speech emotion label; for the full label set, see `utils/const.py` in https://github.com/qiuchili/CSEU-Bench
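The column layout above can be exercised with a small toy frame; the rows below are illustrative stand-ins (the emotion labels shown are assumptions, not real CSEU-Bench annotations; actual rows come from the CSV files):

```python
import pandas as pd

# Toy rows mirroring the documented schema. Values are illustrative only;
# real labels are listed in utils/const.py of the CSEU-Bench repository.
df = pd.DataFrame({
    "target": ["好", "高兴", "今天天气真好。"],
    "target_audio": ["audio/0001.wav", "audio/0002.wav", "audio/0003.wav"],
    "sample_type": ["monosyllabic", "bisyllabic", "short-sentence"],
    "target_attitude": ["approval", "joy", "contentment"],
})

# Select samples by syntactic structure, as encoded in `sample_type`
mono = df[df["sample_type"] == "monosyllabic"]
print(len(mono))                          # 1
print(mono["target_attitude"].tolist())  # ['approval']
```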
# Usage

- Load any of the CSV files listed above with python `pandas`:

```python
import pandas as pd

df = pd.read_csv("CSEU-Bench.csv")
```

- For running the experiments, refer to https://github.com/qiuchili/CSEU-Bench.
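If the files referenced by `target_audio` are WAV clips (an assumption; check the repository for the actual audio format), they can be inspected with the standard-library `wave` module. The file written here is synthetic, standing in for a real benchmark clip:

```python
import math
import struct
import wave

# Write a one-second synthetic 440 Hz tone as a stand-in for a real clip
# referenced by the `target_audio` column.
path = "sample.wav"
rate = 16000
with wave.open(path, "w") as w:
    w.setnchannels(1)       # mono
    w.setsampwidth(2)       # 16-bit PCM
    w.setframerate(rate)
    frames = b"".join(
        struct.pack("<h", int(32767 * 0.1 * math.sin(2 * math.pi * 440 * t / rate)))
        for t in range(rate)
    )
    w.writeframes(frames)

# Read back basic properties of the clip
with wave.open(path, "r") as w:
    sr = w.getframerate()
    n = w.getnframes()
print(sr, n / sr)  # 16000 1.0
```

Libraries such as `librosa` or `soundfile` can be used instead when the waveform itself is needed as an array for model input.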