---
license: other
license_name: derivative-mixed
license_link: LICENSE
task_categories:
  - visual-question-answering
  - video-classification
tags:
  - video
  - mcqa
  - vqa
  - video-generation
  - wan2.2
  - i2v
  - vbvr
size_categories:
  - 1K<n<10K
---

# Video-MCP

**Video-MCP** is a synthetic video dataset for training and evaluating video generation models on **multiple-choice question-answering (MCQA)** tasks. Each sample is a short video clip (~5 seconds) where a visual question-answering prompt is embedded directly into the video frames, and the correct answer is revealed by progressively highlighting one of four answer boxes (A/B/C/D) over the duration of the clip.

The dataset is designed for fine-tuning image-to-video models (specifically **Wan2.2-I2V-A14B**) to produce videos that "answer" visual questions by highlighting the correct option.

The dataset's on-disk layout follows the **[VBVR DataFactory](https://github.com/video-reason/VBVR-DataFactory)** directory convention.

## Examples

Each clip starts with no answer highlighted, then progressively reveals the correct choice over ~5 seconds:

### CoreCognition (M-1) — General Visual Reasoning

| Answer: B | Answer: B |
|---|---|
| ![corecognition 0](examples/corecognition_0.gif) | ![corecognition 1](examples/corecognition_1.gif) |

### ScienceQA (M-2) — Science Education

| Answer: A | Answer: A |
|---|---|
| ![scienceqa 0](examples/scienceqa_0.gif) | ![scienceqa 1](examples/scienceqa_1.gif) |

### MathVision (M-3) — Competition Math

| Answer: A | Answer: D |
|---|---|
| ![mathvision 0](examples/mathvision_0.gif) | ![mathvision 1](examples/mathvision_1.gif) |

### PhyX (M-4) — Physics Reasoning

| Answer: C | Answer: C |
|---|---|
| ![phyx 0](examples/phyx_0.gif) | ![phyx 1](examples/phyx_1.gif) |

## Dataset Details

| Property | Value |
|---|---|
| **Version** | 1.0 |
| **Total samples** | 6,912 |
| **Video resolution** | 832x480 |
| **Frame count** | 81 frames per clip |
| **Frame rate** | 16 FPS |
| **Duration** | ~5.06 seconds per clip |
| **Codec** | H.264, yuv420p, MP4 container |
| **Highlight style** | darken (default) |

## Source Datasets

Video-MCP draws from four publicly available multiple-choice VQA datasets hosted on Hugging Face:

| Generator ID | Name | Source | Samples | Domain |
|---|---|---|---|---|
| M-1 | corecognition | `williamium/CoreCognition` | 753 | General visual reasoning |
| M-2 | scienceqa | `derek-thomas/ScienceQA` | 3,905 | Science education (image-only subset) |
| M-3 | mathvision | `MathLLMs/MathVision` | 1,254 | Competition math with diagrams |
| M-4 | phyx | `Cloudriver/PhyX` | 1,000 | Physics reasoning |

All source datasets are filtered to include only samples that have an associated image and exactly four answer choices (A/B/C/D).
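The filtering rule above can be sketched as a simple predicate. The field names `image` and `choices` are illustrative assumptions; each upstream dataset has its own schema and needs its own field mapping:

```python
def keep_sample(sample: dict) -> bool:
    """Keep only samples with an associated image and exactly four choices.

    Field names here are hypothetical; map them per source dataset.
    """
    has_image = sample.get("image") is not None
    has_four_choices = len(sample.get("choices") or []) == 4
    return has_image and has_four_choices


print(keep_sample({"image": "img.png", "choices": ["Red", "Blue", "Green", "Yellow"]}))  # True
print(keep_sample({"image": None, "choices": ["Yes", "No"]}))                            # False
```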

## Data Structure

Each sample follows the [VBVR DataFactory](https://github.com/video-reason/VBVR-DataFactory) directory convention:

```
{generator_id}_{name}_data-generator/
  clip_config.json
  {name}_task/
    {name}_{NNNN}/
      first_frame.png        # Frame 0: question visible, no highlight
      prompt.txt             # Plain-text question, choices, and answer
      final_frame.png        # Last frame: correct answer fully highlighted
      ground_truth.mp4       # Full clip with progressive answer reveal
      original/
        question.json        # Structured metadata (JSON)
        <source_image>       # Original image from source dataset
```
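Given that layout, samples can be enumerated with a short directory walker. This is a sketch that assumes only the structure shown above; the example path is illustrative:

```python
import json
from pathlib import Path


def iter_samples(generator_root: str):
    """Yield (sample_dir, metadata) for each clip under one generator directory."""
    for qjson in sorted(Path(generator_root).glob("*_task/*/original/question.json")):
        sample_dir = qjson.parent.parent  # the {name}_{NNNN} directory
        meta = json.loads(qjson.read_text(encoding="utf-8"))
        yield sample_dir, meta


# Usage (path is illustrative):
# for sample_dir, meta in iter_samples("M-2_scienceqa_data-generator"):
#     print(sample_dir / "ground_truth.mp4", meta["answer"])
```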

### File Descriptions

| File | Description |
|---|---|
| `first_frame.png` | The opening frame showing the question panel (image + question text + four choices) with A/B/C/D answer boxes in the corners. No answer is highlighted. |
| `final_frame.png` | The closing frame with the correct answer box fully highlighted. |
| `ground_truth.mp4` | The complete video clip. The correct answer box is gradually highlighted from frame 1 to the final frame (linear fade-in). |
| `prompt.txt` | Human-readable text: question, choices (A/B/C/D), and the correct answer letter. |
| `original/question.json` | Structured JSON with fields: `dataset`, `source_id`, `question`, `choices`, `answer`, `original_image_filename`. |
| `original/<image>` | The raw source image preserved with its original filename. |
| `clip_config.json` | Generator-level config: `fps`, `seconds`, `num_frames`, `width`, `height`. |
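The progressive reveal in `ground_truth.mp4` can be modeled as a per-frame highlight strength. The sketch below assumes a plain linear schedule over all 81 frames; the generator's exact easing is an assumption, not documented here:

```python
def highlight_alpha(frame_idx: int, num_frames: int = 81) -> float:
    """Highlight strength for a frame: 0.0 at frame 0, 1.0 at the last frame.

    A linear fade-in sketch; the generator's exact schedule is assumed.
    """
    if num_frames <= 1:
        return 1.0
    return frame_idx / (num_frames - 1)


print(highlight_alpha(0))   # 0.0
print(highlight_alpha(40))  # 0.5
print(highlight_alpha(80))  # 1.0
```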

### Frame Layout

Each frame uses a two-column layout:
- **Left column**: the source VQA image, scaled to fill.
- **Right column**: question text and the four answer options.
- **Corners**: A (top-left), B (top-right), C (bottom-left), D (bottom-right) answer boxes.

### prompt.txt Format

```
What color is the object in the image?

A: Red
B: Blue
C: Green
D: Yellow

Answer: A
```
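Because the layout is fixed (question, blank line, four choices, blank line, answer), `prompt.txt` can be parsed in a few lines. A minimal sketch that assumes exactly this format:

```python
import re


def parse_prompt(text: str) -> dict:
    """Split a prompt.txt into question, choices dict, and answer letter."""
    question, choice_block, answer_line = [
        block.strip() for block in text.strip().split("\n\n")
    ]
    choices = dict(
        re.match(r"([ABCD]):\s*(.*)", line).groups()
        for line in choice_block.splitlines()
    )
    answer = answer_line.removeprefix("Answer:").strip()
    return {"question": question, "choices": choices, "answer": answer}


sample = "What color is the object in the image?\n\nA: Red\nB: Blue\nC: Green\nD: Yellow\n\nAnswer: A\n"
print(parse_prompt(sample)["answer"])  # A
```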

## Video Specifications

These defaults align with **Wan2.2-I2V-A14B** fine-tuning constraints:

- **Resolution**: 832x480 (width and height divisible by 8 for VAE spatial compression)
- **Frames**: 81 (satisfies `1 + 4k` for VAE temporal grid)
- **FPS**: 16
- **Duration**: ~5.06 seconds
- **Codec**: H.264, yuv420p pixel format
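These constraints are easy to sanity-check before training. A minimal sketch using the defaults from the table above:

```python
def check_clip_specs(num_frames: int = 81, fps: int = 16,
                     width: int = 832, height: int = 480) -> float:
    """Validate the constraints listed above; return the clip duration in seconds."""
    assert (num_frames - 1) % 4 == 0, "frame count must be of the form 1 + 4k"
    assert width % 8 == 0 and height % 8 == 0, "dimensions must be divisible by 8"
    return num_frames / fps


print(check_clip_specs())  # 5.0625 (~5.06 s, matching the table above)
```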

## Intended Use

- Fine-tuning image-to-video generation models to produce MCQA-answering videos
- Evaluating video generation models on structured visual reasoning tasks
- Research on embedding structured UI interactions into generated video

## Limitations

- All source questions are filtered to exactly 4 choices (A/B/C/D); questions with fewer or more options are excluded.
- The answer highlight is a simple linear fade-in; no complex visual dynamics.
- Source images and questions inherit any biases or errors from the upstream HF datasets.
- The dataset uses a single fixed resolution (832x480) and frame count (81).

## Citation

If you use this dataset, please cite the source datasets:

- **CoreCognition**: `williamium/CoreCognition` on Hugging Face
- **ScienceQA**: Lu et al., "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering" (NeurIPS 2022)
- **MathVision**: Wang et al., "Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset" (NeurIPS 2024)
- **PhyX**: `Cloudriver/PhyX` on Hugging Face

## License

This dataset is a derivative work. Each source dataset has its own license terms. Users should verify compliance with upstream licenses before redistribution.

## Generation Code

[https://github.com/video-reason/video-mcp](https://github.com/video-reason/video-mcp)