---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: answer
    sequence: int64
  splits:
  - name: valid
    num_bytes: 7094843893.125
    num_examples: 14631
  - name: train
    num_bytes: 140854221157.57
    num_examples: 289911
  download_size: 51389456693
  dataset_size: 147949065050.695
configs:
- config_name: default
  data_files:
  - split: valid
    path: data/valid-*
  - split: train
    path: data/train-*
license: cc-by-4.0
language:
- en
pretty_name: QA Patches Task Dataset
task_categories:
- image-text-to-text
- visual-question-answering
---

# Dataset Card for Patch-Based Visual Question Answering Dataset

## Dataset Details

### Dataset Description

This dataset contains approximately 305,000 triplets of `question`, `answer`, and `image` (289,911 train and 14,631 validation examples) designed for patch-based visual reasoning tasks.

A standard question in this dataset is formatted as follows:

> Image Grid: The image is divided into a 4x4 grid of 16 equal-sized patches. Patches are numbered sequentially, starting from the top-left corner and moving right, then down to the next row.
> Task: Identify the patch number(s) that contain a potted plant.
> Response Format: Provide only the relevant patch number(s) as a list (e.g., [3], [5, 12], or [] if none are found).
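
For concreteness, the numbering scheme can be sketched in a few lines of Python. The 1-based numbering below is an assumption consistent with the examples in the prompt; verify it against the actual data:

```python
# Sketch of the 4x4 patch numbering described in the prompt above.
# Assumption: patch numbers are 1-based (1..16).
GRID = 4

def patch_number(row: int, col: int) -> int:
    """Map 0-based (row, col) grid coordinates to a patch number."""
    return row * GRID + col + 1

for row in range(GRID):
    print([patch_number(row, col) for col in range(GRID)])
# [1, 2, 3, 4]
# [5, 6, 7, 8]
# [9, 10, 11, 12]
# [13, 14, 15, 16]
```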

The dataset is built on top of **COCO-2017**, from which object bounding boxes (bboxes) are used to generate the questions and answers.

- **Curated by:** Yurii Potapov
- **Language(s):** English
- **License:** Annotations and code: CC BY 4.0 (COCO); images: Flickr Terms of Use

### Dataset Sources

- **Repository:** [Not yet published]
- **Paper:** [Not yet published]
- **Demo:** [More Information Needed]

## Uses

### Direct Use

- Training and evaluating **vision-language models (VLMs)** and other multimodal models.
- Patch-based object detection and reasoning.
- Research in **image question answering**, **visual reasoning**, and **multimodal representation learning**.

### Out-of-Scope Use

- Direct commercial redistribution of the original COCO images without following the Flickr Terms of Use.
- Use cases that require the original images to be displayed in full, due to copyright restrictions.

## Dataset Structure

Each example contains three fields:

- **question**: A textual prompt describing the task, with reference to a 4x4 patch grid.
- **answer**: A list of integers giving the indices of the patches that contain the target object(s).
- **image**: The corresponding COCO-2017 image (decoded as a PIL Image object).

The dataset ships with two splits, `train` (289,911 examples) and `valid` (14,631 examples); there is no dedicated test split, so users can carve one out as needed.
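
A minimal loading sketch with the Hugging Face `datasets` library; the repository id below is a placeholder, since the dataset has no published Hub id yet:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the real Hub id once published.
ds = load_dataset("username/qa-patches-task", split="valid")

example = ds[0]
print(example["question"])  # the 4x4 grid prompt
print(example["answer"])    # e.g. [5, 12] -- patch indices as a list of int64
print(example["image"].size)  # PIL.Image.Image, per the `image` feature dtype
```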

## Dataset Creation

### Curation Rationale

The dataset was created to facilitate **patch-level visual question answering** and to improve the training of vision-language models on real-world images with structured spatial queries.

### Source Data

The dataset is based on COCO-2017 images and annotations. Bounding boxes from COCO are used to determine which patches contain specific objects (e.g., potted plants).

#### Data Collection and Processing

- Images are sourced from COCO-2017 (originally from Flickr), respecting the Flickr Terms of Use.
- Bounding boxes from COCO are used to automatically generate the 4x4 grid questions.
- Each question asks which patch(es) contain a specific object.
- Answers are stored as lists of patch indices; the sketch below illustrates the mapping.
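
The generation code is not included in this card, but the bbox-to-patch mapping it describes can be sketched as follows. The function below is illustrative, not the author's implementation; in particular, it counts any overlap between a box and a patch, whereas a real pipeline might apply a minimum-overlap threshold:

```python
# Illustrative mapping from a COCO bounding box to 4x4 grid patch indices.
# COCO bboxes are [x, y, width, height] in pixels; 1-based patch numbers assumed.
GRID = 4

def patches_for_bbox(bbox, img_w, img_h):
    x, y, w, h = bbox
    # 0-based range of grid columns/rows the box overlaps, clamped to the grid.
    col0 = int(x * GRID / img_w)
    col1 = min(int((x + w) * GRID / img_w), GRID - 1)
    row0 = int(y * GRID / img_h)
    row1 = min(int((y + h) * GRID / img_h), GRID - 1)
    return sorted(
        row * GRID + col + 1
        for row in range(row0, row1 + 1)
        for col in range(col0, col1 + 1)
    )

# A box centred in a 640x480 image overlaps the four middle patches.
print(patches_for_bbox([240, 180, 160, 120], 640, 480))  # [6, 7, 10, 11]
```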

#### Who are the source data producers?

The original images were contributed by Flickr users and annotated by the COCO Consortium.

### Annotations

#### Annotation process

Annotations (bounding boxes) are sourced from COCO-2017. Patch assignments and questions were generated programmatically from the bounding box locations.

#### Who are the annotators?

The bounding boxes come from COCO annotators; the patch-level questions are generated automatically.

#### Personal and Sensitive Information

The dataset introduces **no personal or sensitive information** beyond what is already present in the publicly available COCO-2017 images.

## Bias, Risks, and Limitations

- Images reflect the distribution and biases present in COCO-2017.
- Models trained on this dataset may inherit biases from the original dataset.
- Coverage is limited to the object categories annotated in COCO-2017.

### Recommendations

Users should be aware of the **copyright limitations of the original images** and provide attribution for the COCO annotations. Where possible, publish transformed or model-generated outputs rather than the raw images.

## Glossary

- **Patch**: One of 16 equally sized blocks in a 4x4 grid over an image.
- **VLM (Vision-Language Model)**: A model that learns joint representations of images and text.

## Dataset Card Authors

Yurii Potapov

## Dataset Card Contact

yurii.a.potapov@gmail.com