---
license: cc-by-4.0
task_categories:
- text-to-image
- question-answering
- zero-shot-classification
- image-to-text
language:
- en
tags:
- text-to-image
- multimodal
- indoor-scenes
- prompt-engineering
- stable-diffusion
- scene-understanding
- image-generation
- image-retrieval
- image-captioning
- zero-shot-learning
- contrastive-learning
- semantic-alignment
- benchmarking
- evaluation
- data-generation
- synthetic-data
- structured-prompts
- vision-language
- indoor-environments
- object-grounding
- caption-alignment
- prompt-dataset
- art
size_categories:
- 10K<n<100K
---
# Dataset Card for Prompt2SceneBench

## Dataset Details

### Dataset Description

**Prompt2SceneBench** is a structured prompt dataset with 12,606 text descriptions designed for evaluating text-to-image models in realistic indoor environments. 
Each prompt describes the spatial arrangement of 1–4 common household objects on compatible surfaces and in contextually appropriate scenes, sampled using strict object–surface–scene compatibility mappings.

A use case of Prompt2SceneBench is showcased in the **Prompt2SceneGallery** image dataset (https://huggingface.co/datasets/bodhisattamaiti/Prompt2SceneGallery), which was generated with SDXL using the prompts from the Prompt2SceneBench dataset.

- **Curated by:** Bodhisatta Maiti  
- **Funded by:** N/A  
- **Shared by:** Bodhisatta Maiti  
- **Language(s):** English  
- **License:** CC BY 4.0

### Dataset Sources

* **Repository:**
  * https://doi.org/10.5281/zenodo.15876129
  * https://www.kaggle.com/datasets/bodhisattamaiti/prompt2scenebench
  * https://huggingface.co/datasets/bodhisattamaiti/Prompt2SceneBench

## Uses

### Direct Use

Prompt2SceneBench can be directly used for:

1. **Prompt-to-image generation** using models like Stable Diffusion XL to benchmark compositional accuracy in indoor scenes.
2. **Prompt–image alignment scoring**, evaluating how well generated images match the structured prompts.
3. **Compositional generalization benchmarking**, testing models on spatial arrangement of 1–4 objects with increasing difficulty.
4. **Zero-shot captioning evaluation**, using prompts as pseudo-references to measure how captioning models describe generated images.
5. **Scene layout reasoning tasks**, e.g., predicting spatial configuration or scene graph generation from textual prompts.
6. **Style transfer or image editing tasks**, where the structured prompt can guide object placement or scene modification in indoor contexts.
7. **Multimodal fine-tuning or distillation**, where paired structured prompts and generated images can be used to improve alignment in vision-language models (VLMs), especially for grounding objects, spatial relationships, and indoor scene context.
8. **Controllable generation studies**, analyzing prompt structure impact on generated outputs under different text-to-image models.

### Out-of-Scope Use

- Outdoor scenes, surreal or abstract visual compositions.
- Benchmarks involving human-centric understanding or motion.
- Direct use for safety-critical or clinical systems.

## Dataset Structure

### CSV Format (`prompt2scene_prompts_final.csv`)

**Size:** 12,606 prompts

Each row in the CSV corresponds to a single prompt instance and includes the following fields:

- `type`: Prompt category — one of `A`, `B`, `C`, or `D`, based on the number of objects and complexity.
- `object1`, `object2`, `object3`, `object4`: Objects involved in the scene (some may be `None/NaN/Null` depending on type).
- `surface`: The surface where the objects are placed (e.g., `desk surface`, `bench`).
- `scene`: The indoor environment (e.g., `living room`, `study room`).
- `prompt`: The final structured natural language prompt.

Note:
- Type A prompts have 1 object (the `object2`, `object3`, and `object4` fields are None/NaN/Null)
- Type B prompts have 2 objects (the `object3` and `object4` fields are None/NaN/Null)
- Type C prompts have 3 objects (the `object4` field is None/NaN/Null)
- Type D prompts have 4 objects (all object fields are populated)

Sample Examples:
- Type A: a football located on a bench in a basement. (object1: football, surface: bench, scene: basement)
- Type B: a coffee mug beside a notebook on a wooden table in a home office. (object1: coffee mug, object2: notebook, surface: wooden table, scene: home office)
- Type C: a jar, a coffee mug, and a bowl placed on a kitchen island in a kitchen. (object1: jar, object2: coffee mug, object3: bowl, surface: kitchen island, scene: kitchen)
- Type D: An arrangement of an air purifier, a pair of slippers, a guitar, and a pair of shoes on a floor in a bedroom. (object1: air purifier, object2: pair of slippers, object3: guitar, object4: pair of shoes, surface: floor, scene: bedroom)
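
As a minimal sketch, the CSV can be loaded and filtered by prompt type with pandas. The field names follow the schema above; the inline rows here are illustrative copies of the sample examples, not the distributed file:

```python
import io
import pandas as pd

# Illustrative rows copied from the sample examples above; in practice,
# read the distributed file: pd.read_csv("prompt2scene_prompts_final.csv")
sample_csv = io.StringIO(
    "type,object1,object2,object3,object4,surface,scene,prompt\n"
    "A,football,,,,bench,basement,a football located on a bench in a basement.\n"
    "B,coffee mug,notebook,,,wooden table,home office,"
    "a coffee mug beside a notebook on a wooden table in a home office.\n"
)
df = pd.read_csv(sample_csv)

# Type A rows leave object2..object4 empty, which pandas reads as NaN
type_a = df[df["type"] == "A"]
print(type_a["prompt"].tolist())
```

The same filter generalizes to types B–D, e.g. for difficulty-stratified benchmarking runs.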

### JSON Format (`prompt2scene_metadata.json`)

The JSON contains the following keys:

- `objects`: List of all 50 objects used in the prompt generation.
- `scenes`: List of 15 indoor scenes.
- `surfaces`: List of 20 compatible surfaces.
- `object_to_scenes`: Dictionary mapping each object to plausible indoor scenes.
- `object_to_surfaces`: Dictionary mapping each object to compatible surface(s).
- `surface_to_scenes`: Dictionary mapping each surface to scene(s) where it naturally occurs.
- `prompt_templates`: Templates used to generate the prompts for each prompt type (A, B, C, and D); each type has 3 variants.

This JSON file supports reproducibility and reuse by providing all internal mappings used during structured prompt generation.
The community can extend or modify these lists and mappings, or substitute their own prompt templates, depending on the use case.
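
As a sketch of how the mappings can be reused, the snippet below checks whether an object–surface–scene combination is valid under all three rule-based mappings. The dict is a hypothetical excerpt mirroring the metadata structure, not the real file contents:

```python
import json

# Hypothetical excerpt in the shape of prompt2scene_metadata.json;
# in practice: metadata = json.load(open("prompt2scene_metadata.json"))
metadata = {
    "objects": ["football", "coffee mug"],
    "scenes": ["basement", "home office"],
    "surfaces": ["bench", "wooden table"],
    "object_to_scenes": {"football": ["basement"], "coffee mug": ["home office"]},
    "object_to_surfaces": {"football": ["bench"], "coffee mug": ["wooden table"]},
    "surface_to_scenes": {"bench": ["basement"], "wooden table": ["home office"]},
}

def is_valid(obj, surface, scene, meta):
    """A combination is valid only if all three rule-based mappings agree."""
    return (
        scene in meta["object_to_scenes"].get(obj, [])
        and surface in meta["object_to_surfaces"].get(obj, [])
        and scene in meta["surface_to_scenes"].get(surface, [])
    )

print(is_valid("football", "bench", "basement", metadata))        # → True
print(is_valid("football", "wooden table", "basement", metadata)) # → False
```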

## Dataset Creation

### Curation Rationale

The dataset was created to provide a controlled and structured benchmark for evaluating spatial and compositional understanding in generative AI systems, particularly in indoor environments.

### Source Data

#### Data Collection and Processing

All data is programmatically generated using a controlled sampling routine from curated lists of 50 indoor objects, 20 surfaces, and 15 scenes. Only valid object–surface–scene combinations were retained using rule-based mappings.
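
The sampling routine above can be sketched roughly as follows. This is a simplified reconstruction under assumed mini-mappings (standing in for the curated lists of 50 objects, 20 surfaces, and 15 scenes), not the actual generation code, and the template string is an illustrative variant in the style of the Type A examples:

```python
import random

# Hypothetical mini-mappings; only combinations present in both
# mappings can ever be sampled, enforcing validity by construction
object_to_surfaces = {"football": ["bench", "floor"], "jar": ["kitchen island"]}
surface_to_scenes = {"bench": ["basement"], "floor": ["bedroom"], "kitchen island": ["kitchen"]}

def sample_type_a_prompt(rng):
    """Sample one valid object-surface-scene triple and fill a Type A template."""
    obj = rng.choice(sorted(object_to_surfaces))
    surface = rng.choice(object_to_surfaces[obj])
    scene = rng.choice(surface_to_scenes[surface])
    # Illustrative template in the style of the dataset's Type A prompts
    return f"a {obj} located on a {surface} in a {scene}."

rng = random.Random(0)
print(sample_type_a_prompt(rng))
```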

#### Who are the source data producers?

The dataset is fully synthetic and was created by Bodhisatta Maiti through controlled generation logic.

### Annotations

No human annotations are involved beyond the original curation and sampling logic.

#### Personal and Sensitive Information

No personal or sensitive information is present. The dataset consists of entirely synthetic prompts.

## Bias, Risks, and Limitations

This dataset focuses only on physically and contextually plausible indoor scenes. It excludes unusual, humorous, or surrealistic scenarios intentionally. It may not cover the full range of compositional variation needed in creative applications.

### Recommendations

Use with generative models that understand object placement and spatial grounding. Avoid using it to benchmark models trained for outdoor or abstract scenes.

## Citation

**APA:**

Maiti, B. (2025). Prompt2SceneBench: Structured Prompts for Text-to-Image Generation in Indoor Environments [Data set]. Zenodo. https://doi.org/10.5281/zenodo.15876129

## Glossary

- **Type (Prompt category)**: One of A, B, C, or D; the number of objects described in the scene (1 to 4) varies with the prompt type.
- **Surface**: Physical platform or area where objects rest.
- **Scene**: Room or environment in which the surface is situated.

## Dataset Card Authors

- Bodhisatta Maiti

## Dataset Card Contact

- bodhisatta.iitbhu@gmail.com