---
license: cc-by-4.0
task_categories:
- object-detection
tags:
- scene-graph-generation
- visual-relationship-detection
- visual-genome
- coco-format
language:
- en
pretty_name: IndoorVG — Indoor Visual Genome (COCO format)
size_categories:
- 10K<n<100K
---
# IndoorVG — Indoor Visual Genome (COCO format)
**IndoorVG** is a curated split of
[Visual Genome](https://homes.cs.washington.edu/~ranjay/visualgenome/index.html)
targeting real-world **indoor** scenarios (kitchens, offices, living rooms, …).
It was proposed in
[Neau et al. (2024)](https://link.springer.com/chapter/10.1007/978-3-031-55015-7_25)
and is reformatted here in standard COCO-JSON format. The dataset was produced as
part of the [SGG-Benchmark](https://github.com/Maelic/SGG-Benchmark) framework and
was used to train the models described in the **REACT** paper
([Neau et al., BMVC 2025](https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_239/paper.pdf)).
The 84 object classes and 37 predicate classes were **manually selected and
semi-automatically merged** to reduce label noise and ambiguity compared to VG150,
focusing on indoor-relevant concepts.
---
## Annotation overview
Each image comes with:
- **Object bounding boxes** — 84 indoor-focused object categories.
- **Scene-graph relations** — 37 predicate categories connecting pairs of objects as
directed `(subject, predicate, object)` triplets.
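For example, a book lying inside a basket is annotated as the directed triplet
`(book, in, basket)`.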

*Four random validation images with bounding boxes (coloured by category) and
relation arrows (yellow, labelled with the predicate name).*
---
## Dataset statistics
| Split | Images | Object annotations | Relations |
|-------|-------:|-------------------:|----------:|
| train | 9 538 | 125 411 | 72 291 |
| val | 733 | 10 246 | 4 866 |
| test | 4 403 | 61 278 | 29 367 |
---
## Object categories (84)
Manually curated indoor vocabulary: *bag, basket, bin, blind, book, bottle, bowl,
cabinet, ceiling, chair, …* The full list is embedded in `dataset_info.description`.
## Predicate categories (37)
> above · against · at · attached to · behind · between · carrying · covering ·
> cutting · drinking · eating · filled with · for · hanging from · has · holding ·
> in · in front of · laying on · looking at · lying on · mounted on · near · of ·
> on · playing with · reading · sitting at · sitting on · standing on · taking ·
> talking on · under · using · watching · wearing · with
---
## Dataset structure
```python
DatasetDict({
    train: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 9538
    }),
    val: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 733
    }),
    test: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 4403
    }),
})
```
Each row contains:
| Field | Type | Description |
|-------|------|-------------|
| `image` | `Image` | PIL image |
| `image_id` | `int` | Original Visual Genome image id |
| `width` / `height` | `int` | Image dimensions |
| `file_name` | `str` | Original filename |
| `objects` | `List[dict]` | `{id, category_id, bbox (xywh), area, iscrowd, segmentation}` |
| `relations` | `List[dict]` | `{id, subject_id, object_id, predicate_id}` — ids refer to `objects[*].id` |
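Boxes use the COCO `[x, y, width, height]` convention in absolute pixels. A minimal
sketch of the corner-format conversion many detection toolkits expect (the helper
name is illustrative):

```python
def xywh_to_xyxy(bbox):
    """Convert a COCO-style [x, y, w, h] box to [x1, y1, x2, y2] corners."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]
```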
---
## Usage
```python
from datasets import load_dataset
import json
ds = load_dataset("maelic/IndoorVG-coco-format")
# Recover label maps from the embedded metadata
meta = json.loads(ds["train"].info.description)
cat_id2name = {c["id"]: c["name"] for c in meta["categories"]}
pred_id2name = {c["id"]: c["name"] for c in meta["rel_categories"]}
sample = ds["train"][0]
image = sample["image"] # PIL Image
for obj in sample["objects"]:
    print(cat_id2name[obj["category_id"]], obj["bbox"])

for rel in sample["relations"]:
    print(rel["subject_id"], "--", pred_id2name[rel["predicate_id"]], "->", rel["object_id"])
```
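Since `subject_id` and `object_id` reference `objects[*].id` rather than list
positions, a per-image lookup turns relations into readable triplets. A small
sketch building on the snippet above:

```python
# Map each object's annotation id to its category name for this sample.
obj_id2cat = {o["id"]: cat_id2name[o["category_id"]] for o in sample["objects"]}

for rel in sample["relations"]:
    triplet = (obj_id2cat[rel["subject_id"]],
               pred_id2name[rel["predicate_id"]],
               obj_id2cat[rel["object_id"]])
    print(triplet)  # e.g. ('book', 'in', 'basket')
```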
This dataset can also be used through the pycocotools API for scene-graph
generation, via a fork that extends COCO with relation annotations:
```bash
pip install git+https://github.com/Maelic/pycocotools
```
```python
import json

from datasets import load_dataset
from pycocotools.coco import COCO  # fork above; adds getRelIds / loadRels

ds = load_dataset("maelic/IndoorVG-coco-format")
meta = json.loads(ds["train"].info.description)

# Flatten the Hugging Face rows into COCO-style lists. COCO indexing needs
# image dicts with an "id" and an explicit image_id on every annotation,
# which lives at the row level here.
train = ds["train"].remove_columns("image")  # skip image decoding for speed
images, annotations, rel_annotations = [], [], []
for row in train:
    images.append({"id": row["image_id"], "width": row["width"],
                   "height": row["height"], "file_name": row["file_name"]})
    annotations.extend({**o, "image_id": row["image_id"]} for o in row["objects"])
    rel_annotations.extend({**r, "image_id": row["image_id"]} for r in row["relations"])

coco = COCO()
coco.dataset = {
    "images": images,
    "annotations": annotations,
    "rel_annotations": rel_annotations,
    "categories": meta["categories"],
    "rel_categories": meta["rel_categories"],
}
coco.createIndex()

relations = []
for img_id in coco.getImgIds():
    rel_ids = coco.getRelIds(imgIds=img_id)  # relation API from the fork
    relations.extend(coco.loadRels(rel_ids))
```
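To reproduce a visualisation like the one described in the annotation overview
(category-labelled boxes, yellow relation arrows), a minimal PIL sketch, assuming
`ds`, `cat_id2name` and `pred_id2name` from the Usage section (a single box colour
is used for brevity):

```python
from PIL import ImageDraw

sample = ds["val"][0]
img = sample["image"].convert("RGB")
draw = ImageDraw.Draw(img)

# Remember each box centre, keyed by object id, to anchor relation lines.
centers = {}
for o in sample["objects"]:
    x, y, w, h = o["bbox"]
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
    draw.text((x, y), cat_id2name[o["category_id"]], fill="red")
    centers[o["id"]] = (x + w / 2, y + h / 2)

# Draw each relation as a yellow line labelled with its predicate.
for rel in sample["relations"]:
    (x1, y1), (x2, y2) = centers[rel["subject_id"]], centers[rel["object_id"]]
    draw.line([x1, y1, x2, y2], fill="yellow", width=2)
    draw.text(((x1 + x2) / 2, (y1 + y2) / 2),
              pred_id2name[rel["predicate_id"]], fill="yellow")

img.save("indoorvg_sample.jpg")
```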
---
## Citation
If you use this dataset, please cite the IndoorVG paper:
```bibtex
@incollection{neau2023defense,
  title={In defense of scene graph generation for human-robot open-ended interaction in service robotics},
  author={Neau, Ma{\"e}lic and Santos, Paulo and Bosser, Anne-Gwenn and Buche, C{\'e}dric},
  booktitle={Robot World Cup},
  pages={299--310},
  year={2023},
  publisher={Springer}
}
```
And Visual Genome:
```bibtex
@article{krishna2017visual,
  title={Visual genome: Connecting language and vision using crowdsourced dense image annotations},
  author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and others},
  journal={International journal of computer vision},
  volume={123},
  number={1},
  pages={32--73},
  year={2017},
  publisher={Springer}
}
```
And the REACT paper if you use the SGG-Benchmark models:
```bibtex
@inproceedings{Neau_2025_BMVC,
  author    = {Ma{\"e}lic Neau and Paulo Eduardo Santos and Anne-Gwenn Bosser
               and Akihiro Sugimoto and Cedric Buche},
  title     = {REACT: Real-time Efficiency and Accuracy Compromise for Tradeoffs
               in Scene Graph Generation},
  booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025,
               Sheffield, UK, November 24-27, 2025},
  publisher = {BMVA},
  year      = {2025},
  url       = {https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_239/paper.pdf},
}
```
---
## License
Visual Genome images and annotations are released under the
[Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
license.