---
license: cc-by-nc-sa-4.0
task_categories:
- automatic-speech-recognition
language:
- zh
pretty_name: Chinese-LiPS
configs:
- config_name: default
  data_files:
  - split: train
    path: meta_train.csv
  - split: valid
    path: meta_valid.csv
  - split: test
    path: meta_test.csv
extra_gated_prompt: >-
  This dataset is made available for academic and non-commercial research
  purposes only. By accessing or using the dataset, you agree to comply with the
  following terms and conditions:

  1. The dataset may only be used for academic research and educational
  purposes. Any commercial use, including but not limited to commercial product
  development, commercial speech recognition services, or monetization of the
  dataset in any form, is strictly prohibited.

  2. The dataset must not be used for any research or applications that may
  infringe upon the privacy rights of the recorded participants. Any attempt to
  re-identify participants or extract personally identifiable information from
  the dataset is strictly prohibited. Researchers must ensure that their use of
  the dataset aligns with ethical research practices and institutional review
  board (IRB) guidelines where applicable.

  3. If a participant (or their legal guardian) requests the removal of their
  data from the dataset, all recipients of the dataset must comply by deleting
  the affected data from their records. Researchers must acknowledge that such
  withdrawal requests may occur and agree to take reasonable steps to ensure
  compliance.

  4. The dataset, in whole or in part, may not be redistributed, resold, or
  shared with any third party. Each researcher or institution must independently
  request access to the dataset and agree to these terms.

  5. Any published work (e.g., papers, reports, presentations) that uses this
  dataset must properly cite the dataset as specified in the accompanying
  documentation.

  6. You may create derived works (e.g., processed versions of the dataset) for
  research purposes, but such derivatives must not be distributed beyond your
  research group without prior written permission from the dataset maintainers.

  7. The dataset is provided "as is" without any warranties, express or implied.
  The dataset maintainers are not responsible for any direct or indirect
  consequences arising from the use of the dataset.

  8. Failure to comply with these terms may result in revocation of dataset
  access. The dataset maintainers reserve the right to deny access to any
  individual or institution found in violation of these terms.

  9. If the researcher is employed by a for-profit, commercial entity, the
  researcher's employer shall also be bound by these terms and conditions, and
  the researcher hereby represents that they are fully authorized to enter into
  this agreement on behalf of such employer.

  By requesting access to the dataset, you acknowledge that you have read,
  understood, and agreed to these terms.
extra_gated_fields:
  Name: text
  Email: text
  Affiliation: text
  Position: text
  Your Supervisor/manager/director: text
  I agree to the Terms of Access: checkbox
  I agree to use this dataset for non-commercial use ONLY: checkbox
size_categories:
- 10K<n<100K
---
# Chinese-LiPS: A Chinese audio-visual speech recognition dataset with Lip-reading and Presentation Slides

## ⭐ Introduction

The Chinese-LiPS dataset is a multimodal dataset designed for audio-visual speech recognition (AVSR) in Mandarin Chinese. It combines speech, video, and textual transcriptions to enhance automatic speech recognition (ASR) performance, especially in educational and instructional scenarios.
## 🚀 Dataset Details
- Total Duration: 100.84 hours
- Number of Speakers: 207 professional speakers
- Number of Clips: 36,208 video clips
- Audio Format: Stereo WAV, 48 kHz sampling rate
- Video Format:
  - Slide Video: 1080p resolution, 30 fps
  - Lip-Reading Video: 720p resolution, 30 fps
- Annotations: JSON format with transcriptions and extracted text from slides
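
Since the clip-level audio is plain PCM WAV, durations can be checked with Python's standard library alone. A minimal sketch (the path in the comment is a hypothetical example following the dataset's naming scheme):

```python
import wave

def wav_duration_seconds(path: str) -> float:
    """Return the duration of a PCM WAV file in seconds."""
    with wave.open(path, "rb") as wf:
        # Frame count divided by sampling rate (48 kHz for the raw clips).
        return wf.getnframes() / wf.getframerate()

# Hypothetical usage:
# wav_duration_seconds("train/ID1_age_gender_topic/WAV/ID1_age_gender_topic_001.wav")
```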
### Dataset Statistics
| Split | Duration (hrs) | # Segments | # Speakers |
|---|---|---|---|
| Train | 85.37 | 30,341 | 175 |
| Validation | 5.35 | 1,959 | 11 |
| Test | 10.12 | 3,908 | 21 |
| Total | 100.84 | 36,208 | 207 |
## 📂 Dataset Organization
The dataset is structured into several compressed files:
- `image.zip`: First-frame images from slide videos (used for OCR and vision-language models).
- `processed_train.zip`, `processed_val.zip`, `processed_test.zip`: Processed data with 16 kHz audio, 96×96, 25 fps lip-reading videos, and JSON annotations.
- `train.zip`, `val.zip`, `test.zip`: Data split into training, validation, and test sets. Each contains:
```
├── ID1_age_gender_topic/
│   ├── WAV/
│   │   ├── ID1_age_gender_topic_001.json   # Annotation file
│   │   ├── ID1_age_gender_topic_001.wav    # Audio file (48 kHz)
│   ├── PPT/
│   │   ├── ID1_age_gender_topic_001_PPT.mp4    # Slide video (1080p, 30 fps)
│   ├── FACE/
│   │   ├── ID1_age_gender_topic_001_FACE.mp4   # Lip-reading video (720p, 30 fps)
├── ...
```
- `meta_all.csv`, `meta_train.csv`, `meta_valid.csv`, `meta_test.csv`: Metadata files with ID, TOPIC, WAV, PPT, FACE, and TEXT fields.
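
Assuming the per-speaker `WAV/`, `PPT/`, `FACE/` layout above, the four files of each clip can be paired by their shared stem. This is a sketch, not an official loader:

```python
import os

def collect_clips(split_dir: str):
    """Pair each clip's audio, annotation, slide video, and lip video by shared stem.

    Assumes the <split>/<speaker>/{WAV,PPT,FACE}/ layout shown above.
    """
    clips = []
    for speaker in sorted(os.listdir(split_dir)):
        spk_dir = os.path.join(split_dir, speaker)
        if not os.path.isdir(spk_dir):
            continue
        wav_dir = os.path.join(spk_dir, "WAV")
        for name in sorted(os.listdir(wav_dir)):
            if not name.endswith(".wav"):
                continue
            stem = name[:-4]
            clips.append({
                "wav": os.path.join(wav_dir, stem + ".wav"),
                "json": os.path.join(wav_dir, stem + ".json"),
                "ppt": os.path.join(spk_dir, "PPT", stem + "_PPT.mp4"),
                "face": os.path.join(spk_dir, "FACE", stem + "_FACE.mp4"),
            })
    return clips
```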
The TOPIC field uses abbreviations of the Chinese topic names: DZJJ = E-sports & Gaming, JKYS = Health & Wellness, KJ = Science & Technology, LY = Travel & Exploration, QC = Automobile & Industry, RWLS = Culture & History, TY = Sports & Competitions, YS = Movies & TV Series, ZX = Others.
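
For convenience, the abbreviation-to-topic mapping above can be kept as a small lookup table (a sketch; the English names follow the list above):

```python
# TOPIC abbreviations as documented in the dataset card.
TOPIC_NAMES = {
    "DZJJ": "E-sports & Gaming",
    "JKYS": "Health & Wellness",
    "KJ": "Science & Technology",
    "LY": "Travel & Exploration",
    "QC": "Automobile & Industry",
    "RWLS": "Culture & History",
    "TY": "Sports & Competitions",
    "YS": "Movies & TV Series",
    "ZX": "Others",
}

def topic_name(abbrev: str) -> str:
    """Resolve a TOPIC abbreviation; unknown codes fall back to the code itself."""
    return TOPIC_NAMES.get(abbrev, abbrev)
```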
- `meta_test.json`: Includes OCR and InternVL2 prompts for the test set, with the following fields:
  - `wav_path`: Path to the audio file.
  - `ppt_path`: Path to the first-frame image of the slide video.
  - `ocr_text`: Text extracted by PaddleOCR.
  - `vl2_text`: Text extracted by InternVL2.
  - `gt_text`: Ground-truth transcription of the audio.
  - `ocr_vl2_text`: OCR text reprocessed by InternVL2 (not a concatenation of the PaddleOCR and InternVL2 results).
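
A minimal sketch for iterating over `meta_test.json`, under the assumption that it is a JSON array of records carrying the fields listed above:

```python
import json

def load_meta_test(path: str):
    """Yield (gt_text, ocr_text, vl2_text, ocr_vl2_text) per record.

    Assumes meta_test.json is a JSON array of records with the
    fields documented above.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    for rec in records:
        yield rec["gt_text"], rec["ocr_text"], rec["vl2_text"], rec["ocr_vl2_text"]
```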
## 📥 Download
You can download the dataset from the following sources:
- Download from OneDrive
- Download from Huggingface
- Download from Baidu Netdisk (Password: vg2a)
## 📚 Citation

```bibtex
@misc{zhao2025chineselipschineseaudiovisualspeech,
      title={Chinese-LiPS: A Chinese audio-visual speech recognition dataset with Lip-reading and Presentation Slides},
      author={Jinghua Zhao and Yuhang Jia and Shiyao Wang and Jiaming Zhou and Hui Wang and Yong Qin},
      year={2025},
      eprint={2504.15066},
      archivePrefix={arXiv},
      primaryClass={cs.MM},
      url={https://arxiv.org/abs/2504.15066},
}
```