---
license: mit
task_categories:
- object-detection
tags:
- scene-graph
- visual-relationship-detection
- panoptic-scene-graph
- coco-format
language:
- en
pretty_name: PSG — Panoptic Scene Graph (COCO format)
size_categories:
- 10K<n<100K
---

## Predicate classes

The 56 predicate classes are:

> over · in front of · beside · on · in · attached to · hanging from · on back of ·
> falling off · going down · painted on · walking on · running on · crossing ·
> standing on · lying on · sitting on · flying over · jumping over · jumping from ·
> wearing · holding · carrying · looking at · guiding · kissing · eating · drinking ·
> feeding · biting · catching · picking · playing with · chasing · climbing ·
> cleaning · playing · touching · pushing · pulling · opening · cooking · talking to ·
> throwing · slicing · driving · riding · parked on · driving on · about to hit ·
> kicking · swinging · entering · exiting · enclosing · leaning on

---

## Dataset structure

```python
DatasetDict({
    train: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name', 'objects', 'relations'],
        num_rows: 45564
    }),
    val: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name', 'objects', 'relations'],
        num_rows: 1000
    }),
    test: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name', 'objects', 'relations'],
        num_rows: 2186
    }),
})
```

Each row contains:

| Field | Type | Description |
|-------|------|-------------|
| `image` | `Image` | PIL image |
| `image_id` | `int` | Original COCO image id |
| `width` / `height` | `int` | Image dimensions |
| `file_name` | `str` | Original filename |
| `objects` | `List[dict]` | `{id, category_id, bbox (xywh), area, iscrowd, segmentation}` |
| `relations` | `List[dict]` | `{id, subject_id, object_id, predicate_id}` — ids refer to `objects[*].id` |

---

## Usage

```python
from datasets import load_dataset
import json

ds = load_dataset("maelic/PSG-coco-format")

# Recover label maps from the embedded metadata
meta = json.loads(ds["train"].info.description)
cat_id2name = {c["id"]:
c["name"] for c in meta["categories"]} pred_id2name = {c["id"]: c["name"] for c in meta["rel_categories"]} sample = ds["train"][0] image = sample["image"] # PIL Image for obj in sample["objects"]: print(cat_id2name[obj["category_id"]], obj["bbox"]) for rel in sample["relations"]: print(rel["subject_id"], "--", pred_id2name[rel["predicate_id"]], "->", rel["object_id"]) ``` --- ## Citation If you use this dataset, please cite the original PSG paper: ```bibtex @inproceedings{yang2022panoptic, title = {Panoptic scene graph generation}, author = {Yang, Jingkang and Ang, Yi Zhe and Guo, Zujin and Zhou, Kaiyang and Zhang, Wayne and Liu, Ziwei}, booktitle = {European conference on computer vision}, pages = {178--196}, year = {2022}, organization = {Springer}, } ``` And the REACT paper if you use the SGG-Benchmark models: ```bibtex @inproceedings{Neau_2025_BMVC, author = {Ma\"elic Neau and Paulo Eduardo Santos and Anne-Gwenn Bosser and Akihiro Sugimoto and Cedric Buche}, title = {REACT: Real-time Efficiency and Accuracy Compromise for Tradeoffs in Scene Graph Generation}, booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025, Sheffield, UK, November 24-27, 2025}, publisher = {BMVA}, year = {2025}, url = {https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_239/paper.pdf}, } ``` --- ## License This dataset inherits the **MIT** license of the original PSG benchmark. See the [MIT License](https://opensource.org/licenses/MIT) for details.