license: apache-2.0

WenetSpeech-Chuan: A Large-Scale Sichuanese Corpus With Rich Annotation For Dialectal Speech Processing

Yuhang Dai1,*, Ziyu Zhang1,*, Shuai Wang4,5, Longhao Li1, Zhao Guo1, Tianlun Zuo1, Shuiyuan Wang1, Hongfei Xue1, Chengyou Wang1, Qing Wang3, Xin Xu2, Hui Bu2, Jie Li3, Jian Kang3, Binbin Zhang5, Lei Xie1,†

1 Audio, Speech and Language Processing Group (ASLP@NPU), Northwestern Polytechnical University
2 Beijing AISHELL Technology Co., Ltd.
3 Institute of Artificial Intelligence (TeleAI), China Telecom
4 School of Intelligence Science and Technology, Nanjing University
5 WeNet Open Source Community

📑 Paper    |    🐙 GitHub    |    🤗 HuggingFace
🎤 Demo Page    |    💬 Contact Us

Dataset

WenetSpeech-Chuan Overview

  • Contains 10,000 hours of Chuan-Yu (Sichuan–Chongqing) dialect speech with rich annotations, making it the largest open-source resource for Chuan-Yu dialect speech research.
  • Stores metadata in a single JSON file, including audio path, duration, text confidence, speaker identity, SNR, DNSMOS, age, gender, and character-level timestamps. Additional metadata tags may be added in the future.
  • Covers ten domains, including short videos, entertainment, live streams, documentary, audiobook, drama, interview, and news.

Metadata Format

We store all audio metadata in a standardized JSON format with the following fields:

  • Core fields: utt_id (unique identifier for each audio segment), rover_result (ROVER fusion of three ASR transcriptions), confidence (confidence score of the text transcription), jyutping_confidence (confidence score of the Jyutping transcription), and duration (audio duration).
  • Speaker attributes: speaker_id, gender, and age.
  • Audio quality metrics: sample_rate, DNSMOS, and SNR.
  • Timestamp information: timestamp, precisely recording segment boundaries with start and end.
  • Extended metadata under the meta_info field: program (program name), region (geographical information), link (original content link), and domain (domain classification).
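For illustration, an entry with the fields above might look like the sketch below. Every value here is a hypothetical placeholder for the field's type and shape, not a real corpus entry:

```json
{
  "utt_id": "<unique segment id>",
  "rover_result": "<ROVER-fused transcription>",
  "confidence": 0.95,
  "jyutping_confidence": 0.9,
  "duration": 3.2,
  "speaker_id": "<speaker id>",
  "gender": "Male",
  "age": "YOUTH",
  "sample_rate": 16000,
  "DNSMOS": 3.1,
  "SNR": 20.5,
  "timestamp": {"start": 12.34, "end": 15.54},
  "meta_info": {
    "program": "<program name>",
    "region": "<region>",
    "link": "<original content link>",
    "domain": "<domain>"
  }
}
```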

📂 Content Tree

```
WenetSpeech-Chuan
├── metadata.jsonl
├── .gitattributes
└── README.md
```

Data sample:

metadata.jsonl

```
{
  "utt": audio ID (type: str),
  "filename": audio file name (type: str),
  "text": transcription text (type: str),
  "domain": domain tags (type: list[str]),
  "age": speaker age bracket (type: int range, e.g. middle-aged (36~59)),
  "gender": speaker gender (type: str),
  "wvmos": audio quality score (type: float),
  "confidence": transcription confidence in [0, 1] (type: float),
  "emotion": speaker emotion label (type: str, e.g. angry)
}
```

example:

```json
{
  "utt": "013165495633_09mNC_9_5820",
  "filename": "013165495633_09mNC_9_5820.wav",
  "text": "还是选二手装好了的别墅诚心入如意的直接入住的好好",
  "domain": ["短视频"],
  "gender": "Male",
  "age": "YOUTH",
  "wvmos": 2.124380588531494,
  "confidence": 0.8333,
  "emotion": "angry"
}
```
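As a minimal sketch (not official tooling), metadata.jsonl can be read line by line and filtered on any of the fields above. The record here is the sample entry from this README, and the 0.8 confidence threshold is an arbitrary choice for illustration:

```python
import io
import json

# The sample metadata record from this README, serialized as one
# JSON-Lines row (one JSON object per line).
SAMPLE_LINE = json.dumps({
    "utt": "013165495633_09mNC_9_5820",
    "filename": "013165495633_09mNC_9_5820.wav",
    "text": "还是选二手装好了的别墅诚心入如意的直接入住的好好",
    "domain": ["短视频"],
    "gender": "Male",
    "age": "YOUTH",
    "wvmos": 2.124380588531494,
    "confidence": 0.8333,
    "emotion": "angry",
}, ensure_ascii=False)

def load_metadata(fh):
    """Parse a JSON-Lines stream: one metadata record per non-empty line."""
    return [json.loads(line) for line in fh if line.strip()]

def filter_by_confidence(records, threshold=0.8):
    """Keep only utterances whose transcription confidence meets the threshold."""
    return [r for r in records if float(r["confidence"]) >= threshold]

# In practice `fh` would be open("metadata.jsonl", encoding="utf-8").
records = load_metadata(io.StringIO(SAMPLE_LINE))
kept = filter_by_confidence(records)
print(len(kept), kept[0]["utt"])
```

The same pattern extends to filtering by domain, gender, or wvmos score.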

WenetSpeech-Chuan Usage

You can obtain the original video source through the link field in the metadata file (metadata.jsonl), then segment the audio according to the timestamp field to extract the corresponding utterance. For pre-processed audio data, please contact us using the information provided below.

Contact

If you have any questions or would like to collaborate, feel free to reach out to our research team via email: yhdai@mail.nwpu.edu.cn or ziyu_zhang@mail.nwpu.edu.cn.

You are also welcome to join our WeChat group for technical discussions, updates, and access to the pre-processed audio data mentioned above.

WeChat Group QR Code: scan to join our WeChat discussion group

Official Account QR Code