Modalities: Text · Formats: json · Languages: English · Size: < 1K
Tags: text-to-image, multimodal, indoor-scenes, prompt-engineering, stable-diffusion, scene-understanding
tags:
- prompt-dataset
size_categories:
- 10K<n<100K
---
# Dataset Card for Prompt2SceneBench

## Dataset Details

### Dataset Description

**Prompt2SceneBench** is a structured prompt dataset with 12,606 text descriptions designed for evaluating text-to-image models in realistic indoor environments.
Each prompt describes the spatial arrangement of 1–4 common household objects on compatible surfaces and in contextually appropriate scenes, sampled using strict object–surface–scene compatibility mappings.

The dataset is organized into four types:

- **Type A**: 1 object
- **Type B**: 2 objects
- **Type C**: 3 objects
- **Type D**: 4 objects
- **Curated by:** Bodhisatta Maiti
- **Funded by:** N/A
- **Shared by:** Bodhisatta Maiti
- **Language(s):** English
- **License:** CC BY 4.0
## Uses

### Direct Use

Prompt2SceneBench can be used directly for:
1. **Prompt-to-image generation**, using models such as Stable Diffusion XL to benchmark compositional accuracy in indoor scenes.
2. **Prompt–image alignment scoring**, evaluating how well generated images match the structured prompts.
3. **Compositional generalization benchmarking**, testing models on spatial arrangements of 1–4 objects with increasing difficulty.
4. **Zero-shot captioning evaluation**, using prompts as pseudo-references to measure how captioning models describe generated images.
5. **Scene layout reasoning tasks**, e.g., predicting spatial configurations or generating scene graphs from textual prompts.
6. **Style transfer or image editing tasks**, where the structured prompt can guide object placement or scene modification in indoor contexts.
7. **Multimodal fine-tuning or distillation**, where paired structured prompts and generated images can be used to improve alignment in vision-language models (VLMs), especially for grounding objects, spatial relationships, and indoor scene context.
8. **Controllable generation studies**, analyzing how prompt structure affects outputs across different text-to-image models.
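Alignment scoring (use cases 2 and 4) is usually done with a learned image-text model such as CLIP. As a lightweight illustration of the idea only, the helper below (a hypothetical name, not part of the dataset) checks how many of a prompt's objects a caption of the generated image mentions:

```python
def object_recall(prompt_objects, caption):
    # Fraction of the prompt's objects that appear in a caption of the
    # generated image: a crude stand-in for learned alignment scores.
    caption = caption.lower()
    hits = [obj for obj in prompt_objects if obj.lower() in caption]
    return len(hits) / len(prompt_objects)

print(object_recall(["mug", "lamp"], "A lamp next to a mug on a desk."))  # 1.0
```

A real evaluation would replace this string match with an embedding-based score, but the per-object bookkeeping stays the same.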
### Out-of-Scope Use

- Outdoor scenes and surreal or abstract visual compositions.
- Benchmarks involving human-centric understanding or motion.
- Direct use in safety-critical or clinical systems.
## Dataset Structure

- **Fields:**
  - `type`: Prompt category (A/B/C/D)
  - `object1`, `object2`, `object3`, `object4` (if present)
  - `surface`
  - `scene`
  - `prompt`: The full natural language text
- **Format:** CSV
- **Size:** 12,606 prompts
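Given the fields above, the file can be read with the standard library alone. A minimal sketch, where the column order and the sample rows are illustrative assumptions rather than actual dataset entries:

```python
import csv
import io

# Illustrative rows following the schema above; the real CSV has 12,606 rows.
sample = """type,object1,object2,object3,object4,surface,scene,prompt
A,mug,,,,table,kitchen,A mug on a table in a kitchen.
B,book,lamp,,,desk,study,A book and a lamp on a desk in a study.
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Group prompts by category (Type A = 1 object ... Type D = 4 objects).
by_type = {}
for row in rows:
    by_type.setdefault(row["type"], []).append(row["prompt"])

print(by_type["A"])  # ['A mug on a table in a kitchen.']
```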
## Dataset Creation

### Curation Rationale

The dataset was created to provide a controlled and structured benchmark for evaluating spatial and compositional understanding in generative AI systems, particularly in indoor environments.
### Source Data

#### Data Collection and Processing

All data is programmatically generated using a controlled sampling routine over curated lists of 50 indoor objects, 20 surfaces, and 15 scenes. Only valid object–surface–scene combinations were retained, using rule-based mappings.
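The generation code itself is not included in the card. The sketch below illustrates only the rule-based retention step, with tiny hypothetical compatibility maps standing in for the real curated lists of 50 objects, 20 surfaces, and 15 scenes:

```python
# Hypothetical compatibility maps; the actual curated lists are much larger.
OBJECT_SURFACES = {"mug": ["table", "desk"], "pillow": ["bed", "sofa"]}
SURFACE_SCENES = {"table": ["kitchen"], "desk": ["study"],
                  "bed": ["bedroom"], "sofa": ["living room"]}

def valid_triples():
    # Keep only object-surface-scene combinations allowed by both maps.
    for obj, surfaces in OBJECT_SURFACES.items():
        for surface in surfaces:
            for scene in SURFACE_SCENES.get(surface, []):
                yield obj, surface, scene

prompts = [f"A {o} on the {s} in the {sc}." for o, s, sc in valid_triples()]
print(len(prompts))  # 4
```

Because every emitted triple must pass both maps, implausible combinations (a pillow on a kitchen table, say) never reach the prompt templates.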
#### Who are the source data producers?

The dataset is fully synthetic and was created by Bodhisatta Maiti through controlled generation logic.
### Annotations

No human annotations are involved beyond the original curation and sampling logic.

#### Personal and Sensitive Information

No personal or sensitive information is present. The dataset consists entirely of synthetic prompts.
## Bias, Risks, and Limitations

This dataset intentionally covers only physically and contextually plausible indoor scenes, excluding unusual, humorous, or surrealistic scenarios. It may not cover the full range of compositional variation needed in creative applications.

### Recommendations

Use with generative models that understand object placement and spatial grounding. Avoid using it to benchmark models trained for outdoor or abstract scenes.
## Citation

**APA:**

Maiti, B. (2025). *Prompt2SceneBench: Structured prompts for text-to-image generation in indoor environments* [Data set]. Zenodo. https://doi.org/10.5281/zenodo.15876129
## Glossary

- **Prompt category**: The number of objects (1 to 4) described in the scene.
- **Surface**: The physical platform or area on which objects rest.
- **Scene**: The room or environment in which the surface is situated.
## Dataset Card Authors

- Bodhisatta Maiti

## Dataset Card Contact

- bodhisatta.iitbhu@gmail.com