---
license: cc-by-4.0
task_categories:
- object-detection
tags:
- scene-graph-generation
- visual-relationship-detection
- visual-genome
- coco-format
language:
- en
pretty_name: IndoorVG — Indoor Visual Genome (COCO format)
size_categories:
- 10K<n<100K
---

The annotated relationship predicates are:

> above · against · at · attached to · behind · between · carrying · covering ·
> cutting · drinking · eating · filled with · for · hanging from · has · holding ·
> in · in front of · laying on · looking at · lying on · mounted on · near · of ·
> on · playing with · reading · sitting at · sitting on · standing on · taking ·
> talking on · under · using · watching · wearing · with

---

## Dataset structure

```python
DatasetDict({
    train: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name', 'objects', 'relations'],
        num_rows: 9538
    }),
    val: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name', 'objects', 'relations'],
        num_rows: 733
    }),
    test: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name', 'objects', 'relations'],
        num_rows: 4403
    })
})
```

Each row contains:

| Field | Type | Description |
|-------|------|-------------|
| `image` | `Image` | PIL image |
| `image_id` | `int` | Original Visual Genome image id |
| `width` / `height` | `int` | Image dimensions |
| `file_name` | `str` | Original filename |
| `objects` | `List[dict]` | `{id, category_id, bbox (xywh), area, iscrowd, segmentation}` |
| `relations` | `List[dict]` | `{id, subject_id, object_id, predicate_id}` — ids refer to `objects[*].id` |

---

## Usage

```python
import json

from datasets import load_dataset

ds = load_dataset("maelic/IndoorVG-coco-format")

# Recover the label maps from the metadata embedded in the dataset description
meta = json.loads(ds["train"].info.description)
cat_id2name = {c["id"]: c["name"] for c in meta["categories"]}
pred_id2name = {c["id"]: c["name"] for c in meta["rel_categories"]}

sample = ds["train"][0]
image = sample["image"]  # PIL Image

for obj in sample["objects"]:
    print(cat_id2name[obj["category_id"]], obj["bbox"])

for rel in sample["relations"]:
    # subject_id / object_id refer to objects[*].id within the same row
    print(rel["subject_id"], "--", pred_id2name[rel["predicate_id"]], "->", rel["object_id"])
```

This dataset can also be used with a pycocotools fork that extends the COCO API with relation annotations for scene graph generation:

```bash
pip install git+https://github.com/Maelic/pycocotools
```

```python
import json

from datasets import load_dataset
from pycocotools.coco import COCO

ds = load_dataset("maelic/IndoorVG-coco-format")
meta = json.loads(ds["train"].info.description)

# Convert the Hugging Face split to a COCO-style dict.
# Each annotation needs an `image_id` field so that createIndex() can group it by image.
images, annotations, rel_annotations = [], [], []
for sample in ds["train"].remove_columns("image"):  # skip image decoding for speed
    images.append({
        "id": sample["image_id"],
        "width": sample["width"],
        "height": sample["height"],
        "file_name": sample["file_name"],
    })
    annotations += [{**obj, "image_id": sample["image_id"]} for obj in sample["objects"]]
    rel_annotations += [{**rel, "image_id": sample["image_id"]} for rel in sample["relations"]]

coco_ds = {
    "images": images,
    "annotations": annotations,
    "rel_annotations": rel_annotations,
    "categories": meta["categories"],
    "rel_categories": meta["rel_categories"],
}

coco = COCO()
coco.dataset = coco_ds
coco.createIndex()

# getRelIds / loadRels are the relation-aware helpers added by the fork
relations = []
for img_id in coco.getImgIds():
    relations.extend(coco.loadRels(coco.getRelIds(imgIds=img_id)))
```
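For a quick visual sanity check, the boxes and labels can also be drawn straight onto an image with Pillow. This is a minimal sketch that reuses `ds` and `cat_id2name` from the first snippet above; the output filename is only illustrative.

```python
from PIL import ImageDraw

sample = ds["train"][0]
image = sample["image"].copy()  # work on a copy of the decoded PIL image
draw = ImageDraw.Draw(image)

for obj in sample["objects"]:
    x, y, w, h = obj["bbox"]  # COCO-style xywh, absolute pixel coordinates
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
    draw.text((x, max(y - 10, 0)), cat_id2name[obj["category_id"]], fill="red")

image.save("indoorvg_sample.jpg")  # illustrative output path
```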
---

## Citation

If you use this dataset, please cite the IndoorVG paper:

```bibtex
@incollection{neau2023defense,
  title={In defense of scene graph generation for human-robot open-ended interaction in service robotics},
  author={Neau, Ma{\"e}lic and Santos, Paulo and Bosser, Anne-Gwenn and Buche, C{\'e}dric},
  booktitle={Robot World Cup},
  pages={299--310},
  year={2023},
  publisher={Springer}
}
```

And Visual Genome:

```bibtex
@article{krishna2017visual,
  title={Visual genome: Connecting language and vision using crowdsourced dense image annotations},
  author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and others},
  journal={International Journal of Computer Vision},
  volume={123},
  number={1},
  pages={32--73},
  year={2017},
  publisher={Springer}
}
```

And the REACT paper if you use the SGG-Benchmark models:

```bibtex
@inproceedings{Neau_2025_BMVC,
  author={Ma\"elic Neau and Paulo Eduardo Santos and Anne-Gwenn Bosser and Akihiro Sugimoto and Cedric Buche},
  title={REACT: Real-time Efficiency and Accuracy Compromise for Tradeoffs in Scene Graph Generation},
  booktitle={36th British Machine Vision Conference 2025, {BMVC} 2025, Sheffield, UK, November 24-27, 2025},
  publisher={BMVA},
  year={2025},
  url={https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_239/paper.pdf}
}
```

---

## License

Visual Genome images and annotations are released under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.