|
|
--- |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- image-to-text |
|
|
- visual-question-answering |
|
|
- text-to-image |
|
|
language: |
|
|
- en |
|
|
tags: |
|
|
- image-to-text reasoning |
|
|
- sequential |
|
|
- human-annotated |
|
|
- multimodal |
|
|
- vision-language |
|
|
- movie |
|
|
- scenes |
|
|
- video |
|
|
- frames |
|
|
pretty_name: StoryFrames |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
--- |
|
|
|
|
|
# The StoryFrames Dataset |
|
|
[StoryFrames](https://arxiv.org/abs/2502.19409) is a human-annotated dataset created to enhance a model's ability to understand and reason over sequences of images.
|
|
It is specifically designed for tasks like generating a description for the next scene in a story based on previous visual and textual information. |
|
|
The dataset repurposes the [StoryBench dataset](https://arxiv.org/abs/2308.11606), a video dataset originally designed for the task of predicting future frames of a video.
|
|
StoryFrames subsamples frames from those videos and pairs them with annotations for the task of _next-description prediction_. |
|
|
Each "story" is a sample of the dataset and can vary in length and complexity. |
|
|
|
|
|
The dataset contains 8,881 samples, divided into train and validation splits. |
|
|
 |
|
|
|
|
|
If you want to work with a specific context length (i.e., number of scenes per story), you can filter the dataset as follows: |
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
# load the full DatasetDict with its train and validation splits
ds = load_dataset("ingoziegler/StoryFrames")
|
|
|
|
|
# keep only the stories that consist of exactly 3 scenes
|
|
ds_3 = ds.filter(lambda sample: sample["num_scenes"] == 3) |
|
|
``` |
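
Since `load_dataset` returns a `DatasetDict`, the filter above is applied to the train and validation splits independently. Continuing the snippet, a quick way to check how many stories remain per split:

```python
print({split: len(subset) for split, subset in ds_3.items()})
```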
|
|
|
|
|
## What Is a Story in StoryFrames? |
|
|
* **A story is a sequence of scenes:** |
|
|
Each story is composed of multiple scenes, where each scene is a part of the overall narrative. |
|
|
|
|
|
* **Scenes consist of two main components:** |
|
|
* **Images**: Each scene is made up of several frames (images) that have been subsampled from the original video. |
|
|
* **Scene Description**: Each scene (i.e., one or more images) has a single textual description that captures the plot of the scene, as shown in the sketch below.
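
As a minimal sketch of how these two components line up in a loaded sample (using the field names documented later in this card, where `scenes` holds the images and `sentence_parts` the descriptions):

```python
from datasets import load_dataset

ds = load_dataset("ingoziegler/StoryFrames", split="train")
story = ds[0]

# Each scene contributes one list of frames and exactly one description.
for scene_frames, description in zip(story["scenes"], story["sentence_parts"]):
    print(f"{len(scene_frames)} frame(s): {description}")
```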
|
|
|
|
|
## How Is the Data Organized? |
|
|
* **Temporal Markers:** |
|
|
* `start_times` and `end_times`: These fields provide the time markers indicating when each scene begins and ends in the video. They define the boundaries of each scene. |
|
|
|
|
|
* **Frame Subsampling:** |
|
|
* `subsampled_frames_per_scene`: For each scene, a list of frame timestamps is provided. Each timestamp is given in seconds with fractional precision (in the format `frame_sec.millisec`, e.g., `frame_1.448629`). These timestamps indicate which frames were selected from the scene.
|
|
|
|
|
* **Image Data:** |
|
|
* `scenes`: In a structure that mirrors the subsampled timestamps, this field contains the actual images that were extracted. The images are organized as a list of lists: each inner list corresponds to one scene and contains the images in the order they were sampled. |
|
|
|
|
|
* **Narrative Descriptions:** |
|
|
* `sentence_parts`: This field contains a list of strings. Each string provides a description for one scene in the story. Even though a scene is made up of multiple images, the corresponding description captures the plot progression over all images of that scene. |
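
A minimal sketch that combines these fields to walk a single story scene by scene (field names as documented in this card):

```python
from datasets import load_dataset

ds = load_dataset("ingoziegler/StoryFrames", split="train")
story = ds[0]

for i in range(story["num_scenes"]):
    start, end = story["start_times"][i], story["end_times"][i]
    frames = story["scenes"][i]                       # images of scene i
    stamps = story["subsampled_frames_per_scene"][i]  # when each frame was sampled
    print(f"Scene {i}: {start:.2f}s-{end:.2f}s, {len(frames)} frame(s) at {stamps}")
    print(f"  Description: {story['sentence_parts'][i]}")
```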
|
|
|
|
|
## Detailed Field Descriptions |
|
|
* `sentence_parts` |
|
|
* Type: `List[str]` |
|
|
* A narrative breakdown where each entry describes one scene. |
|
|
|
|
|
* `start_times` |
|
|
* Type: `List[float]`
|
|
* A list of timestamps marking the beginning of each scene. |
|
|
|
|
|
* `end_times` |
|
|
* Type: `List[float]` |
|
|
* A list of timestamps marking the end of each scene. |
|
|
|
|
|
* `background_description` |
|
|
* Type: `str` |
|
|
* A brief summary of the overall setting or background of the story. |
|
|
|
|
|
* `video_name` |
|
|
* Type: `str` |
|
|
* The identifier or name of the source video. |
|
|
* This is not a unique identifier for stories, as a single video can contain multiple stories that are annotated separately.
|
|
|
|
|
* `question_info` |
|
|
* Type: `str` |
|
|
* Additional information used together with the video name to uniquely identify each story. |
|
|
|
|
|
* `story_id` |
|
|
* Type: `str` |
|
|
* Automatically generated by combining `video_name` and `question_info` (e.g., "video_name---question_info") to create a unique identifier for each story. |
|
|
|
|
|
* `num_actors_in_video` |
|
|
* Type: `int` |
|
|
* The number of actors present in the video. |
|
|
|
|
|
* `subsampled_frames_per_scene` |
|
|
* Type: `List[List[float]]` |
|
|
* Each inner list contains the timestamps (formatted as `frame_sec.millisec`, e.g., `frame_1.448629`) for the frames that were selected from a scene. |
|
|
* The position of each inner list corresponds to the position of the matching entry in `sentence_parts` and `scenes`.
|
|
* The number of inner lists corresponds to the number of scenes in the story, as given by `num_scenes`.
|
|
|
|
|
* `scenes` |
|
|
* Type: `List[List[Image]]` |
|
|
* Each inner list holds the actual frames (images) that were subsampled from a scene. |
|
|
* The structure of this field directly corresponds to that of `subsampled_frames_per_scene`. |
|
|
* The position of each inner list corresponds to the position of the matching entry in `sentence_parts` and `subsampled_frames_per_scene`.
|
|
|
|
|
* `num_scenes` |
|
|
* Type: `int` |
|
|
* The total number of scenes in the story. |
|
|
|
|
|
* `caption` |
|
|
* Type: `str` |
|
|
* An optional caption for the sample. |
|
|
* This may be empty if no caption was provided. |
|
|
|
|
|
* `sentence_parts_nocontext` |
|
|
* Type: `List[str]` |
|
|
* A variant of the scene descriptions that excludes sequential context. |
|
|
* This may be empty if no annotation was provided. |
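
With these fields, a next-description-prediction example can be assembled by treating all but the last scene as context and holding out the final entry of `sentence_parts` as the target. The following is an illustrative sketch, not the paper's exact preprocessing:

```python
from datasets import load_dataset

ds = load_dataset("ingoziegler/StoryFrames", split="train")
ds = ds.filter(lambda s: s["num_scenes"] >= 2)  # need at least one context scene

def to_next_description_example(story):
    """Split a story into context scenes and a held-out target description."""
    n = story["num_scenes"]
    return {
        "story_id": story["story_id"],
        "context_scenes": story["scenes"][: n - 1],                # List[List[Image]]
        "context_descriptions": story["sentence_parts"][: n - 1],  # List[str]
        "target_description": story["sentence_parts"][n - 1],      # str
    }

example = to_next_description_example(ds[0])
print(example["target_description"])
```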
|
|
|
|
|
## Citation |
|
|
The dataset was introduced as part of the following paper: |
|
|
|
|
|
[ImageChain: Advancing Sequential Image-to-Text Reasoning in Multimodal Large Language Models](https://arxiv.org/abs/2502.19409) |
|
|
|
|
|
If you use StoryFrames in your research or applications, please cite:
|
|
|
|
|
```bibtex
|
|
@misc{villegas2025imagechainadvancingsequentialimagetotext, |
|
|
title={ImageChain: Advancing Sequential Image-to-Text Reasoning in Multimodal Large Language Models}, |
|
|
author={Danae Sánchez Villegas and Ingo Ziegler and Desmond Elliott}, |
|
|
year={2025}, |
|
|
eprint={2502.19409}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CV}, |
|
|
url={https://arxiv.org/abs/2502.19409}, |
|
|
} |
|
|
``` |
|
|
|