---
license: cc-by-4.0
task_categories:
- object-detection
tags:
- scene-graph-generation
- visual-relationship-detection
- visual-genome
- coco-format
language:
- en
pretty_name: IndoorVG — Indoor Visual Genome (COCO format)
size_categories:
- 10K<n<100K
---
# IndoorVG — Indoor Visual Genome (COCO format)
IndoorVG is a curated split of Visual Genome targeting real-world indoor scenarios (kitchens, offices, living rooms, …). It was proposed in Neau et al. (2024) and is reformatted here in the standard COCO JSON format.
It was produced as part of the SGG-Benchmark framework and was used to train the models described in the REACT paper (Neau et al., BMVC 2025).
The 84 object classes and 37 predicate classes were manually selected and semi-automatically merged to reduce label noise and ambiguity compared to VG150, focusing on indoor-relevant concepts.
## Annotation overview
Each image comes with:
- Object bounding boxes — 84 indoor-focused object categories.
- Scene-graph relations — 37 predicate categories connecting pairs of objects as directed `(subject, predicate, object)` triplets, e.g. `(book, on, chair)`.
*Four random validation images with bounding boxes (coloured by category) and relation arrows (yellow, labelled with the predicate name).*
## Dataset statistics
| Split | Images | Object annotations | Relations |
|---|---|---|---|
| train | 9 538 | 125 411 | 72 291 |
| val | 733 | 10 246 | 4 866 |
| test | 4 403 | 61 278 | 29 367 |
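These counts can be re-derived from the loaded dataset itself; a quick sanity-check sketch (loading details are covered under Usage below):

```python
from datasets import load_dataset

ds = load_dataset("maelic/IndoorVG-coco-format")
for split in ("train", "val", "test"):
    # objects / relations are per-image lists, so sum their lengths
    n_obj = sum(len(objs) for objs in ds[split]["objects"])
    n_rel = sum(len(rels) for rels in ds[split]["relations"])
    print(f"{split}: {ds[split].num_rows} images, {n_obj} objects, {n_rel} relations")
```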
## Object categories (84)
Manually curated indoor vocabulary: bag, basket, bin, blind, book, bottle, bowl, cabinet, ceiling, chair, … Full list embedded in `dataset_info.description`.
## Predicate categories (37)
above · against · at · attached to · behind · between · carrying · covering · cutting · drinking · eating · filled with · for · hanging from · has · holding · in · in front of · laying on · looking at · lying on · mounted on · near · of · on · playing with · reading · sitting at · sitting on · standing on · taking · talking on · under · using · watching · wearing · with
## Dataset structure

```python
DatasetDict({
    train: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 9538
    }),
    val: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 733
    }),
    test: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 4403
    }),
})
```
Each row contains:
| Field | Type | Description |
|---|---|---|
| `image` | `Image` | PIL image |
| `image_id` | `int` | Original Visual Genome image id |
| `width` / `height` | `int` | Image dimensions |
| `file_name` | `str` | Original filename |
| `objects` | `List[dict]` | `{id, category_id, bbox (xywh), area, iscrowd, segmentation}` |
| `relations` | `List[dict]` | `{id, subject_id, object_id, predicate_id}` — ids refer to `objects[*].id` |
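For illustration, a single row (minus the `image` field, which is a decoded PIL image) might look like the following; all ids and values here are made up, not taken from the dataset:

```python
{
    "image_id": 2345678,
    "width": 640,
    "height": 480,
    "file_name": "2345678.jpg",
    "objects": [
        {"id": 1, "category_id": 12, "bbox": [34.0, 50.0, 120.0, 200.0],
         "area": 24000.0, "iscrowd": 0, "segmentation": []},
        {"id": 2, "category_id": 70, "bbox": [10.0, 220.0, 300.0, 150.0],
         "area": 45000.0, "iscrowd": 0, "segmentation": []},
    ],
    "relations": [
        # subject_id / object_id point at objects[*].id above
        {"id": 1, "subject_id": 1, "object_id": 2, "predicate_id": 24},
    ],
}
```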
## Usage
```python
from datasets import load_dataset
import json

ds = load_dataset("maelic/IndoorVG-coco-format")

# Recover label maps from the embedded metadata
meta = json.loads(ds["train"].info.description)
cat_id2name = {c["id"]: c["name"] for c in meta["categories"]}
pred_id2name = {c["id"]: c["name"] for c in meta["rel_categories"]}

sample = ds["train"][0]
image = sample["image"]  # PIL Image

for obj in sample["objects"]:
    print(cat_id2name[obj["category_id"]], obj["bbox"])

for rel in sample["relations"]:
    print(rel["subject_id"], "--", pred_id2name[rel["predicate_id"]], "->", rel["object_id"])
```
This dataset can be used with the `pycocotools` API for scene graph generation:

```bash
pip install git+https://github.com/Maelic/pycocotools
```
```python
import json
from pycocotools.coco import COCO
from datasets import load_dataset

ds = load_dataset("maelic/IndoorVG-coco-format")
train = ds["train"]
meta = json.loads(train.info.description)

# Convert the Hugging Face split to a COCO-style dict: the COCO API expects
# image dicts keyed by "id" and flat annotation lists tagged with "image_id"
coco_ds = {
    "images": [{"id": i, "width": w, "height": h, "file_name": f}
               for i, w, h, f in zip(train["image_id"], train["width"],
                                     train["height"], train["file_name"])],
    "annotations": [{**obj, "image_id": img_id}
                    for img_id, objs in zip(train["image_id"], train["objects"])
                    for obj in objs],
    "rel_annotations": [{**rel, "image_id": img_id}
                        for img_id, rels in zip(train["image_id"], train["relations"])
                        for rel in rels],
    "categories": meta["categories"],
    "rel_categories": meta["rel_categories"],
}

coco = COCO()
coco.dataset = coco_ds
coco.createIndex()

relations = []
for img_id in coco.getImgIds():
    rel_ids = coco.getRelIds(imgIds=img_id)
    relations.extend(coco.loadRels(rel_ids))
```
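From there, the flattened relations can be decoded into readable triplets using the label maps, continuing from the block above (this sketch assumes object ids are globally unique, as in standard COCO annotations):

```python
# Rebuild the label maps, then resolve each relation's endpoints to names
cat_id2name = {c["id"]: c["name"] for c in meta["categories"]}
pred_id2name = {c["id"]: c["name"] for c in meta["rel_categories"]}
ann_id2cat = {ann["id"]: ann["category_id"] for ann in coco_ds["annotations"]}

for rel in relations[:5]:
    subj = cat_id2name[ann_id2cat[rel["subject_id"]]]
    obj = cat_id2name[ann_id2cat[rel["object_id"]]]
    print(f"({subj}, {pred_id2name[rel['predicate_id']]}, {obj})")
```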
## Citation
If you use this dataset, please cite the IndoorVG paper:
```bibtex
@incollection{neau2023defense,
  title={In defense of scene graph generation for human-robot open-ended interaction in service robotics},
  author={Neau, Ma{\"e}lic and Santos, Paulo and Bosser, Anne-Gwenn and Buche, C{\'e}dric},
  booktitle={Robot World Cup},
  pages={299--310},
  year={2023},
  publisher={Springer}
}
```
And Visual Genome:
```bibtex
@article{krishna2017visual,
  title={Visual genome: Connecting language and vision using crowdsourced dense image annotations},
  author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and others},
  journal={International journal of computer vision},
  volume={123},
  number={1},
  pages={32--73},
  year={2017},
  publisher={Springer}
}
```
And the REACT paper if you use the SGG-Benchmark models:
```bibtex
@inproceedings{Neau_2025_BMVC,
  author = {Ma\"elic Neau and Paulo Eduardo Santos and Anne-Gwenn Bosser
            and Akihiro Sugimoto and Cedric Buche},
  title = {REACT: Real-time Efficiency and Accuracy Compromise for Tradeoffs
           in Scene Graph Generation},
  booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025,
               Sheffield, UK, November 24-27, 2025},
  publisher = {BMVA},
  year = {2025},
  url = {https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_239/paper.pdf},
}
```
## License
Visual Genome images and annotations are released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
