---
license: other
license_name: derivative-mixed
license_link: LICENSE
task_categories:
- visual-question-answering
- video-classification
tags:
- video
- mcqa
- vqa
- video-generation
- wan2.2
- i2v
- vbvr
size_categories:
- 1K<n<10K
---

## Dataset Structure

Each sample is a directory containing:

```
<sample_id>/
├── first_frame.png
├── final_frame.png
├── ground_truth.mp4
├── prompt.txt
├── clip_config.json
└── original/
    ├── question.json
    └── <original_image>   # Original image from source dataset
```

### File Descriptions

| File | Description |
|---|---|
| `first_frame.png` | The opening frame showing the question panel (image + question text + four choices) with A/B/C/D answer boxes in the corners. No answer is highlighted. |
| `final_frame.png` | The closing frame with the correct answer box fully highlighted. |
| `ground_truth.mp4` | The complete video clip. The correct answer gradually highlights from frame 1 to the final frame (linear fade-in). |
| `prompt.txt` | Human-readable text: question, choices (A/B/C/D), and the correct answer letter. |
| `original/question.json` | Structured JSON with fields: `dataset`, `source_id`, `question`, `choices`, `answer`, `original_image_filename`. |
| `original/` | The raw source image preserved with its original filename. |
| `clip_config.json` | Generator-level config: `fps`, `seconds`, `num_frames`, `width`, `height`. |

### Frame Layout

Each frame uses a two-column layout:

- **Left column**: the source VQA image, scaled to fill.
- **Right column**: question text and the four answer options.
- **Corners**: A (top-left), B (top-right), C (bottom-left), D (bottom-right) answer boxes.

### prompt.txt Format

```
What color is the object in the image?
A: Red
B: Blue
C: Green
D: Yellow
Answer: A
```

## Video Specifications

These defaults align with **Wan2.2-I2V-A14B** fine-tuning constraints:

- **Resolution**: 832x480 (width and height divisible by 8 for VAE spatial compression)
- **Frames**: 81 (satisfies `1 + 4k` for the VAE temporal grid)
- **FPS**: 16
- **Duration**: ~5.06 seconds (81 frames / 16 fps)
- **Codec**: H.264, yuv420p pixel format

## Intended Use

- Fine-tuning image-to-video generation models to produce MCQA-answering videos
- Evaluating video generation models on structured visual reasoning tasks
- Research on embedding structured UI interactions into generated video

## Limitations

- All source questions are filtered to exactly 4 choices (A/B/C/D); questions with fewer or more options are excluded.
- The answer highlight is a simple linear fade-in; no complex visual dynamics.
- Source images and questions inherit any biases or errors from the upstream HF datasets.
- The dataset uses a single fixed resolution (832x480) and frame count (81).

## Citation

If you use this dataset, please cite the source datasets:

- **CoreCognition**: `williamium/CoreCognition` on Hugging Face
- **ScienceQA**: Lu et al., "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering" (NeurIPS 2022)
- **MathVision**: Wang et al., "Measuring Multimodal Mathematical Reasoning with MATH-Vision Dataset" (2024)
- **PhyX**: `Cloudriver/PhyX` on Hugging Face

## License

This dataset is a derivative work. Each source dataset has its own license terms. Users should verify compliance with upstream licenses before redistribution.

## Generation Code

[https://github.com/video-reason/video-mcp](https://github.com/video-reason/video-mcp)
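As an illustration, the `prompt.txt` layout documented above (question, four choices, `Answer:` line, one field per line) can be parsed with a few lines of stdlib Python. This is a sketch under that assumed layout; `parse_prompt` is a hypothetical helper, not part of the dataset's tooling.

```python
def parse_prompt(text: str) -> dict:
    """Illustrative parser for the prompt.txt layout documented above (assumed field order)."""
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    question = lines[0]  # first non-empty line is the question
    choices, answer = {}, None
    for ln in lines[1:]:
        key, _, value = ln.partition(":")
        key = key.strip()
        if key == "Answer":
            answer = value.strip()        # correct answer letter
        elif key in ("A", "B", "C", "D"):
            choices[key] = value.strip()  # one of the four options
    return {"question": question, "choices": choices, "answer": answer}

sample = """What color is the object in the image?
A: Red
B: Blue
C: Green
D: Yellow
Answer: A"""

record = parse_prompt(sample)
print(record["answer"])  # A
```

For richer fields (`dataset`, `source_id`, `original_image_filename`), prefer the structured `original/question.json` over parsing the human-readable text.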