---
license: apache-2.0
task_categories:
- summarization
language:
- en
---

# 📚 Clip-CC Dataset

**Clip-CC** is a high-quality dataset of 200 short educational videos (1 minute 30 seconds each) paired with human-written summaries. It is designed for tasks such as video summarization, video-language alignment, and multimodal learning.

---

## 🧾 Dataset Summary

- 🎞️ 200 videos, named `1.mp4` to `200.mp4`
- 📝 Each video has a concise human-written summary
- 📂 File references stored in a JSONL file (`metadata.jsonl`) with paths to each video

---

## 💡 Use Cases

- Video summarization and caption generation
- Vision-language alignment
- Video QA and downstream educational AI tasks
- Fine-tuning multimodal models (e.g., Flamingo, VideoBERT, LLaVA)

---

## 📦 Dataset Structure

Each entry has:

| Field       | Description                              |
|-------------|------------------------------------------|
| `id`        | Unique ID, e.g., `001`                   |
| `file_name` | Relative path to the video file          |
| `summary`   | Human-written summary of video content   |

### 🔍 Example

```json
{
  "id": "001",
  "file_name": "clips/1.mp4",
  "summary": "An introduction to gravitational force using animations."
}
```