---
language:
- en
license: apache-2.0
size_categories:
- 1M<n<10M
task_categories:
- video-text-to-text
configs:
- config_name: Live-CC-5M for Dataset Viewer
  data_files:
  - split: preview_first_100
    path: live_cc_100_for_preview.json
  - split: full_5m
    path: live_cc_5m_with_seeks.jsonl
---

# Dataset Card for Live-CC-5M
|
|
## Dataset Description

- **Curated by:** Joya Chen
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
|
## Uses

This dataset is used for pre-training the [LiveCC-7B-Base](https://huggingface.co/chenjoya/LiveCC-7B-Base) model. We only allow the use of this dataset for academic research and educational purposes. For the user prompts generated by OpenAI GPT-4o, we recommend checking the [OpenAI Usage Policy](https://openai.com/policies/usage-policies/).
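
For a quick look at the data, the 100-sample preview file can be fetched and parsed directly. A minimal sketch, assuming the repository id is `chenjoya/Live-CC-5M` (adjust if it differs):

```python
import json
from huggingface_hub import hf_hub_download

# Repository id assumed from this card; adjust if it differs.
path = hf_hub_download(
    repo_id="chenjoya/Live-CC-5M",
    filename="live_cc_100_for_preview.json",
    repo_type="dataset",
)
with open(path) as f:
    conversations = json.load(f)
print(conversations[0])  # one user/assistant conversation (format described below)
```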
|
- **Project Page**: https://showlab.github.io/livecc
- **Paper**: https://huggingface.co/papers/2504.16030
|
### Live-CC-5M Dataset

- Statistics: 5,047,208 YouTube video-CC samples, each 30–240s long.

- Annotation JSONL (YouTube CC):
|
Each line of the JSONL file is organized as a common user/assistant conversation with a special "text_stream" key. Example:

```
[
{"role": "user", "content": [{"type": "video", "video": "video/youtube/-4dnPeRv1ns.mp4", "video_start": 16.8, "video_end": 158.8}, {"type": "text", "text": "", "previous": "", "title": "Airsoft G&G Combat Machine M4 Review"}]},
{"role": "assistant", "content": [{"type": "text_stream", "text_stream": [[16.8, 16.9, "all"], [16.9, 17.0, "right"], [17.0, 17.1, "you"], [17.1, 17.3, "guys"], [17.3, 17.4, "so"], [17.4, 17.5, "this"], ...]}]}
]
```

- "title" denotes the YouTube title.
- "previous" denotes the ASR content immediately before "video_start".
- Each item in "text_stream" is a [start timestamp, end timestamp, word] triple.

During pre-training, we use "title" and "previous" as context. Please refer to our dataloader (https://github.com/showlab/livecc/data/lmm_dataset.py) to see how to make it compatible with popular LMMs (e.g., the Qwen-VL series).
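
As an illustration of the format (not the official dataloader), the sketch below flattens a "text_stream" into plain text for a clip window and prepends the "title" and "previous" context; `build_context` and its prompt layout are hypothetical:

```python
# A minimal sketch, not the official dataloader (see lmm_dataset.py in the repo).
# build_context and its prompt layout are illustrative assumptions.
def stream_to_text(text_stream, video_start, video_end):
    """Keep the words whose spans fall inside [video_start, video_end]."""
    return " ".join(w for s, e, w in text_stream if s >= video_start and e <= video_end)

def build_context(title, previous, caption):
    return f"Title: {title}\nPrevious: {previous}\nCaption: {caption}"

stream = [[16.8, 16.9, "all"], [16.9, 17.0, "right"], [17.0, 17.1, "you"], [17.1, 17.3, "guys"]]
print(build_context("Airsoft G&G Combat Machine M4 Review", "", stream_to_text(stream, 16.8, 158.8)))
```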
| | |
The last line of the JSONL file is special: it holds the byte offsets (file seek indices) of all preceding lines:
```
b'[0, 3149, 7796, 10436, 18949, 22917, 41985, 65721, 73045, 76797, 82262, ...]'
```

This allows easy streaming and random-access loading:

```python
import json

# read the last line of a jsonl file by scanning backwards from the end
def readlastline(path: str):
    with open(path, "rb") as f:
        f.seek(-2, 2)  # jump to just before the end, skipping the trailing newline
        while f.read(1) != b"\n":
            f.seek(-2, 1)
        return f.readline()

# parse it into the list of seek indices
seeks = json.loads(readlastline('live_cc_5m_with_seeks.jsonl'))

# in the data loader, seek directly to the requested sample
def __getitem__(self, index):
    ...
    with open('live_cc_5m_with_seeks.jsonl') as f:
        f.seek(seeks[index])
        datum = json.loads(f.readline())
    ...
```
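
If you filter or re-shard the JSONL yourself, you can regenerate the trailing seek-index line with a sketch like the following (an illustration, not the official production script):

```python
import json

def write_jsonl_with_seeks(records, path):
    """Write records as JSONL, then append a final line holding each line's byte offset."""
    seeks = []
    with open(path, "wb") as f:
        for record in records:
            seeks.append(f.tell())  # byte offset where this line starts
            f.write(json.dumps(record).encode("utf-8") + b"\n")
        f.write(json.dumps(seeks).encode("utf-8"))  # last line: the seek indices
```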
| | |
- Videos: The 5M source videos are too large for us to host, so unfortunately we cannot share them directly. However:
  - You can find all YouTube IDs in the annotation JSONL (see the sketch after this list).
  - We have released the video files of the SFT dataset at https://huggingface.co/datasets/chenjoya/Live-WhisperX-526K.
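
For example, the YouTube ID can be recovered from the "video" field of each annotation and the clip fetched with a downloader such as yt-dlp (a sketch under those assumptions; yt-dlp is not part of this release, and the availability of individual videos is not guaranteed):

```python
import subprocess
from pathlib import Path

def download_video(video_field: str, out_dir: str = "video/youtube"):
    """Fetch the YouTube video behind an annotation's "video" field,
    e.g. "video/youtube/-4dnPeRv1ns.mp4" -> ID "-4dnPeRv1ns"."""
    youtube_id = Path(video_field).stem  # file name without .mp4 is the YouTube ID
    url = f"https://www.youtube.com/watch?v={youtube_id}"
    subprocess.run(
        ["yt-dlp", "-f", "mp4", "-o", f"{out_dir}/{youtube_id}.mp4", url],
        check=True,
    )
```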
|
### Data Production Pipeline
|
Please read Section 3 of the paper for details. The pipeline has been fully open-sourced at: https://github.com/showlab/livecc/data/production/pretrain
|
## Citation
|
If you find our work helpful, feel free to cite us ;)
|
```bibtex
@article{livecc,
  author  = {Joya Chen and Ziyun Zeng and Yiqi Lin and Wei Li and Zejun Ma and Mike Zheng Shou},
  title   = {LiveCC: Learning Video LLM with Streaming Speech Transcription at Scale},
  journal = {arXiv preprint arXiv:2504.16030},
  year    = {2025},
}
```
|
## Contact
|
[Joya Chen](https://chenjoya.github.io/)