---
dataset_info:
features:
- name: image
dtype: image
- name: question
dtype: string
- name: answer
sequence: int64
splits:
- name: valid
num_bytes: 7094843893.125
num_examples: 14631
- name: train
num_bytes: 140854221157.57
num_examples: 289911
download_size: 51389456693
dataset_size: 147949065050.695
configs:
- config_name: default
data_files:
- split: valid
path: data/valid-*
- split: train
path: data/train-*
license: cc-by-4.0
language:
- en
pretty_name: QA Patches Task Dataset
task_categories:
- image-text-to-text
- visual-question-answering
---
# Dataset Card for Patch-Based Visual Question Answering Dataset
## Dataset Details
### Dataset Description
This dataset contains 304,542 `question`/`answer`/`image` triplets (289,911 train, 14,631 valid) designed for patch-based visual reasoning tasks.
A standard question in this dataset is formatted as follows:
> Image Grid: The image is divided into a 4x4 grid of 16 equal-sized patches. Patches are numbered sequentially from the top-left corner and moving right, then down to the next row.
> Task: Identify the patch number(s) that contain a potted plant.
> Response Format: Provide only the relevant patch number(s) as a list (e.g., [3], [5, 12], or [] if none are found).
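When evaluating models against this response format, a reply needs to be parsed back into a list of integers. A minimal sketch (the helper name is ours, not part of the dataset):

```python
import ast
import re


def parse_patch_response(text: str) -> list[int]:
    """Extract the first bracketed list of integers from a model reply.

    Handles the three documented shapes: '[3]', '[5, 12]', and '[]'.
    """
    match = re.search(r"\[[\d,\s]*\]", text)
    if match is None:
        raise ValueError(f"no patch list found in: {text!r}")
    return list(ast.literal_eval(match.group(0)))
```

This is deliberately lenient (it ignores any surrounding prose a model may emit); a stricter evaluator could require the reply to be exactly the bracketed list.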
The dataset is built on top of **COCO-2017**, from which object bounding boxes (bboxes) are used to generate questions and answers.
- **Curated by:** Yurii Potapov
- **Language(s):** English
- **License:** CC BY 4.0 for annotations and code (following COCO); images are subject to the Flickr Terms of Use
### Dataset Sources
- **Repository:** [Not yet published]
- **Paper:** [Not yet published]
- **Demo:** [More Information Needed]
## Uses
### Direct Use
- Training and evaluating **visual-language models (VLMs)** or other multimodal models.
- Patch-based object detection and reasoning.
- Research in **image question answering**, **visual reasoning**, and **multimodal representation learning**.
### Out-of-Scope Use
- Direct commercial redistribution of original COCO images without following Flickr Terms of Use.
- Use cases where original images are required to be displayed in full, due to copyright restrictions.
## Dataset Structure
- **question**: A textual description of the task referring to a 4x4 patch grid.
- **answer**: List of integers representing the patch indices containing the target object(s).
- **image**: Corresponding COCO-2017 image (PIL Image object or file path).
The dataset ships with `train` and `valid` splits (see the configuration above); users can carve a test split out of `train` as needed.
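Since answers are patch indices, it is often useful to map an index back to its pixel region, e.g. to crop or visualize the referenced patch. A sketch, assuming 1-based row-major numbering (1 = top-left, 16 = bottom-right), which is our reading of the question text rather than a documented guarantee:

```python
def patch_to_box(index: int, width: int, height: int, grid: int = 4) -> tuple[int, int, int, int]:
    """Return the (left, upper, right, lower) pixel bounds of a patch.

    Assumes 1-based, row-major patch numbering over a `grid` x `grid` layout.
    """
    row, col = divmod(index - 1, grid)
    pw, ph = width // grid, height // grid
    return (col * pw, row * ph, (col + 1) * pw, (row + 1) * ph)
```

With PIL, `image.crop(patch_to_box(3, *image.size))` would then extract patch 3 of an image.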
## Dataset Creation
### Curation Rationale
The dataset was created to facilitate **patch-level visual question answering** and to improve the training of visual-language models using real-world images with structured spatial queries.
### Source Data
The dataset is based on COCO-2017 images and annotations. Bounding boxes from COCO are used to determine which patches contain specific objects (e.g., potted plants).
#### Data Collection and Processing
- Images are sourced from COCO-2017 (Flickr) respecting their Terms of Use.
- Bounding boxes from COCO are used to automatically generate 4x4 grid questions.
- Each question asks which patch(es) contain a specific object.
- Answers are stored as lists of patch indices.
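The bbox-to-patch assignment described above can be sketched as follows. The exact overlap rule used to build the dataset is not documented; this version marks every patch that a box overlaps, assuming COCO's `[x, y, w, h]` bbox convention and 1-based patch numbering:

```python
def bbox_to_patches(bbox: list[float], width: int, height: int, grid: int = 4) -> list[int]:
    """Return 1-based indices of grid patches overlapped by a COCO [x, y, w, h] bbox."""
    x, y, w, h = bbox
    pw, ph = width / grid, height / grid
    # Range of patch columns/rows touched by the box, clamped to the grid.
    col0 = max(0, min(grid - 1, int(x // pw)))
    col1 = max(0, min(grid - 1, int((x + w) // pw)))
    row0 = max(0, min(grid - 1, int(y // ph)))
    row1 = max(0, min(grid - 1, int((y + h) // ph)))
    return [r * grid + c + 1 for r in range(row0, row1 + 1) for c in range(col0, col1 + 1)]
```

A production generator would likely also apply a minimum-overlap threshold so that a box grazing a patch boundary does not claim the neighboring patch.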
#### Who are the source data producers?
Original images were contributed by Flickr users and annotated by the COCO Consortium.
### Annotations
#### Annotation process
Annotations (bounding boxes) are sourced from COCO-2017. Patch assignments and questions were generated programmatically from bounding-box locations.
#### Who are the annotators?
Annotations are from COCO annotators; patch-level questions are generated automatically.
#### Personal and Sensitive Information
The dataset does **not contain personal or sensitive information**.
## Bias, Risks, and Limitations
- Images reflect the distribution and biases present in COCO-2017.
- Models trained on this dataset may inherit biases from the original dataset.
- Limited to the objects annotated in COCO-2017.
### Recommendations
Users should be aware of the **copyright limitations of the original images** and provide attribution for COCO annotations. Use transformed or model-generated outputs rather than raw images for publication if possible.
## Glossary
- **Patch**: One of 16 equally sized blocks in a 4x4 grid over an image.
- **VLM (Visual-Language Model)**: A model that learns joint representations of images and text.
## Dataset Card Authors
Yurii Potapov
## Dataset Card Contact
yurii.a.potapov@gmail.com