---
license: apache-2.0
task_categories:
- summarization
language:
- en
---
# Clip-CC Dataset (Movie Clips Edition)

Clip-CC is a curated dataset of 200 movie clips sourced from YouTube, each trimmed to 1 minute and 30 seconds and paired with a human-written summary. It is designed for tasks such as video summarization, multimodal learning, and video-language alignment using real-world entertainment content.
## Dataset Summary

- 200 movie clips from well-known films
- Each clip is referenced via a YouTube link
- Each clip includes a human-written summary describing the scene
- All metadata is stored in a JSON Lines file (`metadata.jsonl`)
## Use Cases
- Video summarization and caption generation
- Vision-Language alignment
- Video QA and downstream educational AI tasks
- Fine-tuning multimodal models (e.g., Flamingo, VideoBERT, LLaVA)
## Dataset Structure

Each entry has the following fields:
| Field | Description |
|---|---|
| `id` | Unique ID, e.g., `001` |
| `file_link` | YouTube link to the movie clip |
| `summary` | Human-written summary of the video content |
## Example

```json
{
  "id": "001",
  "file_link": "https://www.youtube.com/watch?v=zRFatzj_5do",
  "summary": "A musical performance scene from Pitch Perfect showcasing a lively rendition of 'I've Got the Magic in Me'."
}
```