WSC-Train / README.md
---
license: apache-2.0
---

📂 Content Tree

WenetSpeech-Chuan
├── metadata.jsonl
│
├── audio_labels/
│   ├── wav_utt_id.jsonl
│   ├── wav_utt_id.jsonl
│   ├── ...
│   └── wav_utt_id.jsonl
│
├── .gitattributes
└── README.md

Data sample (CN):

metadata.jsonl

{
"utt_id": 原始长音频id,
"wav_utt_id": 转化为wav后的长音频id,
"source_audio_path": 原始长音频路径,
"audio_labels": 转化后的长音频切分出的短音频标签文件路径,
"url": 原始长音频下载链接
}

audio_labels/wav_utt_id.jsonl:

{
"wav_utt_id_timestamp": 以 转化为wav后的长音频id_时间戳信息 作为切分后的短音频id (type: str),
"wav_utt_id_timestamp_path": 短音频数据路径 (type: str),
"audio_clip_id": 该段短音频在长音频中的切分顺序编号,
"timestamp": 时间戳信息,
"wvmos_score": wvmos分数,衡量音频片段质量 (type: float),
"text": 对应时间戳的音频片段的抄本 (type: str),
"text_punc": 带标点的抄本 (type: str),
"spk_num": 音频片段说话人个数,single/multi (type: str),
"confidence": 抄本置信度 (type: float),
"emotion": 说话人情感标签 (type: str, eg: 愤怒),
"age": 说话人年龄标签 (type: int 范围, eg: 中年(36~59)),
"gender": 说话人性别标签 (type: str, eg: 男/女)
}

Data sample (EN):

metadata.jsonl

{
"utt_id": Original long audio ID,
"wav_utt_id": Converted long audio ID after transforming to WAV format,
"source_audio_path": Path to the original long audio file,
"audio_labels": Path to the label file of short audio segments cut from the converted long audio,
"url": Download link for the original long audio
}
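Since `metadata.jsonl` is a JSON Lines file (one JSON object per line, one entry per long audio), it can be read with a short sketch like the following. The file path and the skipping of blank lines are assumptions, not part of the dataset specification:

```python
import json


def load_metadata(path="metadata.jsonl"):
    """Read a JSON Lines metadata file: one JSON object per line,
    one entry per long audio. Blank lines are skipped (assumption)."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records
```

Each returned record is a dict with the keys described above (`utt_id`, `wav_utt_id`, `source_audio_path`, `audio_labels`, `url`), so `record["audio_labels"]` points at the per-utterance label file under `audio_labels/`.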

audio_labels/wav_utt_id.jsonl:

{
"wav_utt_id_timestamp": Short audio segment ID, composed of the converted long audio ID + timestamp information (type: str),
"wav_utt_id_timestamp_path": Path to the short audio data (type: str),
"audio_clip_id": Sequence number of this short segment within the long audio,
"timestamp": Timestamp information,
"wvmos_score": WVMOS score, measuring the quality of the audio segment (type: float),
"text": Transcript of the audio segment corresponding to the timestamp (type: str),
"text_punc": Transcript with punctuation (type: str),
"spk_num": Number of speakers in the audio segment, single/multi (type: str),
"confidence": Confidence score of the transcript (type: float),
"emotion": Speaker’s emotion label (type: str, e.g., anger),
"age": Speaker’s age label (type: int range, e.g., middle-aged (36–59)),
"gender": Speaker’s gender label (type: str, e.g., male/female)
}
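Because each segment carries a `wvmos_score` and a transcript `confidence`, a common use of the label files is to keep only higher-quality segments. The sketch below shows one way to do that; the threshold values are purely illustrative assumptions, and only the field names come from the schema above:

```python
import json


def load_segments(label_path, min_wvmos=3.0, min_confidence=0.9):
    """Yield short-audio segment dicts from an audio_labels/<wav_utt_id>.jsonl
    file that pass simple quality filters. Thresholds are illustrative
    defaults, not values recommended by the dataset."""
    with open(label_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            seg = json.loads(line)
            if (seg["wvmos_score"] >= min_wvmos
                    and seg["confidence"] >= min_confidence):
                yield seg
```

For example, `list(load_segments("audio_labels/a_wav.jsonl"))` returns the segments of that long audio whose WVMOS score and transcript confidence clear the chosen thresholds.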