---
license: mit
task_categories:
  - object-detection
tags:
  - scene-graph-generation
  - visual-relationship-detection
  - visual-genome
  - vg150
  - coco-format
language:
  - en
pretty_name: VG150 - Visual Genome 150 (COCO format)
size_categories:
  - 100K<n<1M
---

# VG150 — Visual Genome 150 (COCO format)

This dataset is the standard VG150 split of Visual Genome (Krishna et al., 2017), the most widely used benchmark for Scene Graph Generation, reformatted into the standard COCO JSON format. VG150 keeps the 150 most frequent object categories and 50 most frequent relations of the original Visual Genome dataset, as selected in the *Scene Graph Generation by Iterative Message Passing* paper (Xu et al., 2017).

This version in COCO format was produced as part of the SGG-Benchmark framework and used to train the models described in the REACT paper (Neau et al., BMVC 2025).

> ⚠️ **Bias warning:** VG150 has been heavily criticised for high class overlap and annotation biases (e.g. *person* / *man* / *men* / *people*). See VrR-VG (ICCV'19) and Neau et al. (ICCVW'23) for reference.


## Annotation overview

Each image comes with:

- **Object bounding boxes** — 150 Visual Genome object categories.
- **Scene-graph relations** — 50 predicate categories connecting pairs of objects as directed *(subject, predicate, object)* triplets.
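For illustration, a single directed triplet such as *man riding horse* would appear in the annotation fields roughly as follows (all ids and category/predicate indices below are hypothetical, chosen only for the example):

```python
# Hypothetical annotation fragment for one directed triplet "man riding horse".
# Ids and indices are made up for illustration; real values come from the dataset.
objects = [
    {"id": 101, "category_id": 78, "bbox": [10, 20, 120, 260]},   # e.g. "man"
    {"id": 102, "category_id": 57, "bbox": [80, 150, 300, 210]},  # e.g. "horse"
]
relations = [
    {"id": 1, "subject_id": 101, "object_id": 102, "predicate_id": 29},  # e.g. "riding"
]
```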

### Annotation example — val split

*Four random validation images with bounding boxes (coloured by category) and relation arrows (yellow, labelled with the predicate name).*


## Dataset statistics

| Split | Images | Object annotations | Relations |
|-------|--------|--------------------|-----------|
| train | 73 538 | 793 061 | 439 063 |
| val | 4 844 | 54 415 | 30 133 |
| test | 27 032 | 297 922 | 153 509 |
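These counts can be recomputed from the dataset itself; a minimal sketch (dropping the image column so iteration skips image decoding):

```python
from datasets import load_dataset

ds = load_dataset("maelic/VG150-coco-format")
for split in ("train", "val", "test"):
    anns = ds[split].remove_columns("image")  # avoid decoding images while counting
    n_obj = sum(len(ex["objects"]) for ex in anns)
    n_rel = sum(len(ex["relations"]) for ex in anns)
    print(f"{split}: {anns.num_rows} images, {n_obj} objects, {n_rel} relations")
```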

### Object categories (150)

The top-150 Visual Genome object vocabulary used by the standard SGG split. The full list is embedded in `dataset_info.description`.
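A minimal sketch for pulling the vocabulary out of the embedded metadata (using the same `categories` key as the Usage section below):

```python
import json
from datasets import load_dataset

# Label maps are stored as JSON in the split's description field.
info = load_dataset("maelic/VG150-coco-format", split="val").info
meta = json.loads(info.description)
object_names = [c["name"] for c in meta["categories"]]
print(len(object_names))  # expected: 150
print(object_names[:5])
```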

### Predicate categories (50)

and · says · belonging to · over · parked on · growing on · standing on · made of · attached to · at · in · hanging from · wears · in front of · from · for · watching · lying on · to · behind · flying in · looking at · on back of · holding · between · laying on · riding · has · across · wearing · walking on · eating · above · part of · walking in · sitting on · under · covered in · carrying · using · along · with · on · covering · of · against · playing · near · painted on · mounted on


## Dataset structure

```
DatasetDict({
    train: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 73538
    }),
    val: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 4844
    }),
    test: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 27032
    }),
})
```
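To inspect a sample without downloading every split, streaming mode should work as it usually does for Hub-hosted datasets (a minimal sketch):

```python
from datasets import load_dataset

# Stream the validation split and fetch the first example.
val = load_dataset("maelic/VG150-coco-format", split="val", streaming=True)
sample = next(iter(val))
print(sample["image_id"], len(sample["objects"]), len(sample["relations"]))
```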

Each row contains:

| Field | Type | Description |
|-------|------|-------------|
| `image` | `Image` | PIL image |
| `image_id` | `int` | Original Visual Genome image id |
| `width` / `height` | `int` | Image dimensions |
| `file_name` | `str` | Original filename |
| `objects` | `List[dict]` | `{id, category_id, bbox (xywh), area, iscrowd, segmentation}` |
| `relations` | `List[dict]` | `{id, subject_id, object_id, predicate_id}` — ids refer to `objects[*].id` |
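Because `subject_id` / `object_id` refer to `objects[*].id` rather than positions in the list, resolving a relation to its endpoint objects needs an id lookup first; a minimal helper sketch:

```python
def iter_triplets(sample):
    """Yield (subject_obj, predicate_id, object_obj) for one dataset row."""
    # subject_id / object_id are annotation ids, not list indices.
    by_id = {obj["id"]: obj for obj in sample["objects"]}
    for rel in sample["relations"]:
        yield by_id[rel["subject_id"]], rel["predicate_id"], by_id[rel["object_id"]]
```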

## Usage

```python
from datasets import load_dataset
import json

ds = load_dataset("maelic/VG150-coco-format")

# Recover label maps from the embedded metadata
meta = json.loads(ds["train"].info.description)
cat_id2name  = {c["id"]: c["name"] for c in meta["categories"]}
pred_id2name = {c["id"]: c["name"] for c in meta["rel_categories"]}

sample = ds["train"][0]
image  = sample["image"]          # PIL Image
for obj in sample["objects"]:
    print(cat_id2name[obj["category_id"]], obj["bbox"])
for rel in sample["relations"]:
    print(rel["subject_id"], "--", pred_id2name[rel["predicate_id"]], "->", rel["object_id"])
```
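To draw boxes and labels in the spirit of the annotation example above, a minimal visualisation sketch reusing `ds` and `cat_id2name` from the snippet above (the styling here is arbitrary, not the card's rendering):

```python
from PIL import ImageDraw

sample = ds["val"][0]
img = sample["image"].copy()
draw = ImageDraw.Draw(img)
for obj in sample["objects"]:
    x, y, w, h = obj["bbox"]  # COCO-style xywh
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
    draw.text((x, y), cat_id2name[obj["category_id"]], fill="red")
img.save("vg150_val_example.png")
```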

## Citation

If you use this dataset, please cite Visual Genome:

```bibtex
@article{krishna2017visual,
  title={Visual genome: Connecting language and vision using crowdsourced dense image annotations},
  author={Krishna, Ranjay and Zhu, Yuke and Groth, Oliver and Johnson, Justin and Hata, Kenji and Kravitz, Joshua and Chen, Stephanie and Kalantidis, Yannis and Li, Li-Jia and Shamma, David A and others},
  journal={International journal of computer vision},
  volume={123},
  number={1},
  pages={32--73},
  year={2017},
  publisher={Springer}
}
```

And the original paper that established the VG150 split:

```bibtex
@inproceedings{xu2017scene,
  title={Scene graph generation by iterative message passing},
  author={Xu, Danfei and Zhu, Yuke and Choy, Christopher B and Fei-Fei, Li},
  booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
  pages={5410--5419},
  year={2017}
}
```

And the REACT paper if you use the SGG-Benchmark models:

```bibtex
@inproceedings{Neau_2025_BMVC,
  author    = {Ma\"elic Neau and Paulo Eduardo Santos and Anne-Gwenn Bosser
               and Akihiro Sugimoto and Cedric Buche},
  title     = {REACT: Real-time Efficiency and Accuracy Compromise for Tradeoffs
               in Scene Graph Generation},
  booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025,
               Sheffield, UK, November 24-27, 2025},
  publisher = {BMVA},
  year      = {2025},
  url       = {https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_239/paper.pdf},
}
```

## License

Visual Genome images and annotations are released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.