---
license: mit
task_categories:
  - object-detection
tags:
  - scene-graph
  - visual-relationship-detection
  - panoptic-scene-graph
  - coco-format
language:
  - en
pretty_name: PSG — Panoptic Scene Graph (COCO format)
size_categories:
  - 10K<n<100K
---

# PSG — Panoptic Scene Graph (COCO format)

This dataset is a reformatted version of the Panoptic Scene Graph (PSG) benchmark (Yang et al., ECCV 2022) in standard COCO-JSON format, ready for use with object detection and scene graph generation pipelines.

It was produced as part of the SGG-Benchmark framework and used to train the models described in the REACT paper (Neau et al., BMVC 2025).

> ⚠️ **Disclaimer:** this dataset does **not** contain the original segmentation masks, only bounding boxes and category labels. It is therefore not a panoptic dataset but a scene graph dataset, suitable only for training bounding-box-based SGG models. The original PSG dataset can be downloaded from the PSG project page.


## Annotation overview

Each image comes with:

- **Object bounding boxes** — 133 COCO object categories.
- **Scene-graph relations** — 56 predicate categories connecting pairs of objects as directed (subject, predicate, object) triplets.

### Annotation example — val split

*Four random validation images with bounding boxes (coloured by category) and relation arrows (yellow, labelled with the predicate name).*


## Dataset statistics

| Split | Images | Object annotations | Relations |
|-------|--------|--------------------|-----------|
| train | 45 564 | 494 213            | 254 214   |
| val   | 1 000  | 19 039             | 7 458     |
| test  | 2 186  | 24 910             | 13 705    |
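The per-image annotation density follows directly from the table above; a quick sketch using the table's numbers (underscores as thousands separators):

```python
# Annotation density per split, numbers taken from the statistics table.
splits = {
    # split: (images, object annotations, relations)
    "train": (45_564, 494_213, 254_214),
    "val":   (1_000,  19_039,  7_458),
    "test":  (2_186,  24_910,  13_705),
}

for name, (images, objects, relations) in splits.items():
    print(f"{name}: {objects / images:.1f} objects/image, "
          f"{relations / images:.1f} relations/image")
```

On average the train split has roughly 10.8 objects and 5.6 relations per image.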

### Object categories (133)

Standard 133-class COCO panoptic vocabulary: person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, … (full list embedded in dataset_info.description).

### Predicate categories (56)

over · in front of · beside · on · in · attached to · hanging from · on back of · falling off · going down · painted on · walking on · running on · crossing · standing on · lying on · sitting on · flying over · jumping over · jumping from · wearing · holding · carrying · looking at · guiding · kissing · eating · drinking · feeding · biting · catching · picking · playing with · chasing · climbing · cleaning · playing · touching · pushing · pulling · opening · cooking · talking to · throwing · slicing · driving · riding · parked on · driving on · about to hit · kicking · swinging · entering · exiting · enclosing · leaning on


## Dataset structure

```
DatasetDict({
    train: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 45564
    }),
    val: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 1000
    }),
    test: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 2186
    }),
})
```

Each row contains:

| Field | Type | Description |
|-------|------|-------------|
| `image` | `Image` | PIL image |
| `image_id` | `int` | Original COCO image id |
| `width` / `height` | `int` | Image dimensions |
| `file_name` | `str` | Original filename |
| `objects` | `List[dict]` | `{id, category_id, bbox (xywh), area, iscrowd, segmentation}` |
| `relations` | `List[dict]` | `{id, subject_id, object_id, predicate_id}` — ids refer to `objects[*].id` |
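Note that `bbox` uses the COCO `[x, y, width, height]` convention, while many detection tools expect corner coordinates. A minimal conversion sketch (the box values below are illustrative, not real data):

```python
def xywh_to_xyxy(bbox):
    """Convert a COCO [x, y, width, height] box to [x1, y1, x2, y2] corners."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

box = [10, 20, 30, 40]    # illustrative COCO-style xywh box
print(xywh_to_xyxy(box))  # [10, 20, 40, 60]
```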

## Usage

```python
from datasets import load_dataset
import json

ds = load_dataset("maelic/PSG-coco-format")

# Recover label maps from the embedded metadata
meta = json.loads(ds["train"].info.description)
cat_id2name  = {c["id"]: c["name"] for c in meta["categories"]}
pred_id2name = {c["id"]: c["name"] for c in meta["rel_categories"]}

sample = ds["train"][0]
image  = sample["image"]          # PIL Image
for obj in sample["objects"]:
    print(cat_id2name[obj["category_id"]], obj["bbox"])
for rel in sample["relations"]:
    print(rel["subject_id"], "--", pred_id2name[rel["predicate_id"]], "->", rel["object_id"])
```
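Since `subject_id` and `object_id` reference `objects[*].id` rather than list positions, resolving them to category names takes one extra lookup. A small hypothetical helper, `readable_triplets`, sketches this; the toy label maps and sample below are illustrative only (real maps come from the embedded metadata as shown above):

```python
def readable_triplets(sample, cat_id2name, pred_id2name):
    """Turn one row's relations into (subject, predicate, object) name triplets.
    subject_id / object_id reference objects[*].id, not list indices."""
    names = {obj["id"]: cat_id2name[obj["category_id"]] for obj in sample["objects"]}
    return [
        (names[r["subject_id"]], pred_id2name[r["predicate_id"]], names[r["object_id"]])
        for r in sample["relations"]
    ]

# Toy label maps and row in the format described above (illustrative values).
cat_id2name = {0: "person", 2: "car"}
pred_id2name = {46: "riding"}
sample = {
    "objects": [
        {"id": 1, "category_id": 0, "bbox": [0, 0, 10, 10]},
        {"id": 2, "category_id": 2, "bbox": [5, 5, 20, 10]},
    ],
    "relations": [{"id": 0, "subject_id": 1, "object_id": 2, "predicate_id": 46}],
}

print(readable_triplets(sample, cat_id2name, pred_id2name))
# [('person', 'riding', 'car')]
```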

## Citation

If you use this dataset, please cite the original PSG paper:

```bibtex
@inproceedings{yang2022panoptic,
  title        = {Panoptic scene graph generation},
  author       = {Yang, Jingkang and Ang, Yi Zhe and Guo, Zujin and Zhou, Kaiyang
                  and Zhang, Wayne and Liu, Ziwei},
  booktitle    = {European conference on computer vision},
  pages        = {178--196},
  year         = {2022},
  organization = {Springer},
}
```

If you use the SGG-Benchmark models, please also cite the REACT paper:

```bibtex
@inproceedings{Neau_2025_BMVC,
  author    = {Ma\"elic Neau and Paulo Eduardo Santos and Anne-Gwenn Bosser
               and Akihiro Sugimoto and Cedric Buche},
  title     = {REACT: Real-time Efficiency and Accuracy Compromise for Tradeoffs
               in Scene Graph Generation},
  booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025,
               Sheffield, UK, November 24-27, 2025},
  publisher = {BMVA},
  year      = {2025},
  url       = {https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_239/paper.pdf},
}
```

## License

This dataset inherits the MIT license of the original PSG benchmark; see the MIT License for details.