---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: video_url
    dtype: string
  - name: conversation
    list:
    - name: content
      list:
      - name: text
        dtype: string
      - name: type
        dtype: string
    - name: role
      dtype: string
  - name: num_frames
    dtype: int64
  splits:
  - name: train
    num_bytes: 578
    num_examples: 2
  - name: validation
    num_bytes: 578
    num_examples: 2
  download_size: 7696
  dataset_size: 1156
---
# eagle0504/llava-video-text-dataset

This is a tiny LLaVA video-text dataset with exactly four samples: two for training and two for validation.
## Fields

- `video_url`: video URL (MP4 or GIF format)
- `conversation`: conversation in LLaVA format with `user`/`assistant` roles
- `num_frames`: number of frames per video (5)
## Dataset Structure
Each sample contains a conversation in LLaVA format:
```json
{
  "video_url": "https://example.com/video.mp4",
  "conversation": [
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "What is in this video?"},
        {"type": "image"},
        {"type": "image"},
        {"type": "image"},
        {"type": "image"},
        {"type": "image"}
      ]
    },
    {
      "role": "assistant",
      "content": [{"type": "text", "text": "There is a cat in the video."}]
    }
  ],
  "num_frames": 5
}
```
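A quick way to sanity-check a sample against this structure is to verify that the number of `{"type": "image"}` placeholders in the conversation matches `num_frames`. A minimal standard-library sketch (the `sample` dict and the `count_image_placeholders` helper are illustrative, not part of the dataset):

```python
# Hypothetical sample following the format shown above.
sample = {
    "video_url": "https://example.com/video.mp4",
    "conversation": [
        {
            "role": "user",
            "content": [{"type": "text", "text": "What is in this video?"}]
            + [{"type": "image"} for _ in range(5)],
        },
        {
            "role": "assistant",
            "content": [{"type": "text", "text": "There is a cat in the video."}],
        },
    ],
    "num_frames": 5,
}


def count_image_placeholders(conversation):
    """Count {"type": "image"} entries across all turns."""
    return sum(
        1
        for turn in conversation
        for part in turn["content"]
        if part["type"] == "image"
    )


assert count_image_placeholders(sample["conversation"]) == sample["num_frames"]
```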
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("eagle0504/llava-video-text-dataset")
```
## Model Compatibility
This dataset is designed for LLaVA models that support video input through multiple image frames.
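In practice, such models consume the conversation by substituting each `{"type": "image"}` entry with the model's image token when the prompt is built, one token per sampled frame. A minimal sketch of that substitution (the `<image>` token and the `USER:`/`ASSISTANT:` framing follow a common LLaVA convention; a real processor's chat template may differ):

```python
def build_prompt(conversation, image_token="<image>"):
    """Flatten a LLaVA-style conversation into a single prompt string,
    replacing each {"type": "image"} entry with the image token.
    Illustrative only: actual LLaVA processors apply their own template."""
    lines = []
    for turn in conversation:
        parts = [
            part["text"] if part["type"] == "text" else image_token
            for part in turn["content"]
        ]
        lines.append(f'{turn["role"].upper()}: {" ".join(parts)}')
    return "\n".join(lines)


# One image placeholder per frame, as in the num_frames=5 samples above.
conversation = [
    {
        "role": "user",
        "content": [{"type": "text", "text": "What is in this video?"}]
        + [{"type": "image"}] * 5,
    },
]
prompt = build_prompt(conversation)
# prompt contains one "<image>" token per sampled frame
```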