---
license: mit
task_categories:
- object-detection
tags:
- scene-graph
- visual-relationship-detection
- panoptic-scene-graph
- coco-format
language:
- en
pretty_name: PSG — Panoptic Scene Graph (COCO format)
size_categories:
- 10K<n<100K
---
# PSG — Panoptic Scene Graph (COCO format)
This dataset is a reformatted version of the **Panoptic Scene Graph (PSG)** benchmark
([Yang et al., ECCV 2022](https://arxiv.org/abs/2207.11247)) in standard COCO-JSON
format, ready for use with object detection and scene graph generation pipelines.
It was produced as part of the
[SGG-Benchmark](https://github.com/Maelic/SGG-Benchmark) framework and used to train
the models described in the **REACT** paper
([Neau et al., BMVC 2025](https://arxiv.org/abs/2405.16116)).
/!\ **Disclaimer:** this dataset does **not** contain the original segmentation masks, only
bounding boxes and category labels. It is therefore **not** a panoptic dataset, but a
scene-graph dataset that can only be used to train bounding-box-based SGG models.
The original PSG dataset can be downloaded from the [PSG project page](https://github.com/Jingkang50/OpenPSG).
---
## Annotation overview
Each image comes with:
- **Object bounding boxes** — 133 COCO object categories.
- **Scene-graph relations** — 56 predicate categories connecting pairs of objects as
  directed `(subject, predicate, object)` triplets (see the sketch below).

*Figure: four random validation images with bounding boxes (coloured by category) and
relation arrows (yellow, labelled with the predicate name).*
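
For illustration, here is what a single annotated relation looks like once decoded. This
is a hypothetical record: the ids and label names below are made up for the example.

```python
# Hypothetical excerpt from one image's annotations (ids/labels are illustrative):
objects = [
    {"id": 0, "category_id": 0, "bbox": [12.0, 34.0, 120.0, 200.0]},  # person
    {"id": 1, "category_id": 1, "bbox": [90.0, 150.0, 110.0, 80.0]},  # bicycle
]
relations = [
    {"id": 0, "subject_id": 0, "object_id": 1, "predicate_id": 46},   # riding (illustrative id)
]
# Reads as the directed triplet: (person, riding, bicycle)
```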
---
## Dataset statistics
| Split | Images | Object annotations | Relations |
|-------|-------:|-------------------:|----------:|
| train | 45,564 | 494,213 | 254,214 |
| val | 1,000 | 19,039 | 7,458 |
| test | 2,186 | 24,910 | 13,705 |
---
## Object categories (133)
Standard 133-class COCO panoptic vocabulary (80 *thing* + 53 *stuff* classes): *person,
bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, …* (the full
list is embedded in `dataset_info.description`; see [Usage](#usage) below).
## Predicate categories (56)
> over · in front of · beside · on · in · attached to · hanging from · on back of ·
> falling off · going down · painted on · walking on · running on · crossing ·
> standing on · lying on · sitting on · flying over · jumping over · jumping from ·
> wearing · holding · carrying · looking at · guiding · kissing · eating · drinking ·
> feeding · biting · catching · picking · playing with · chasing · climbing ·
> cleaning · playing · touching · pushing · pulling · opening · cooking · talking to ·
> throwing · slicing · driving · riding · parked on · driving on · about to hit ·
> kicking · swinging · entering · exiting · enclosing · leaning on
---
## Dataset structure
```python
DatasetDict({
    train: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 45564
    }),
    val: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 1000
    }),
    test: Dataset({
        features: ['image', 'image_id', 'width', 'height', 'file_name',
                   'objects', 'relations'],
        num_rows: 2186
    }),
})
```
Each row contains:
| Field | Type | Description |
|-------|------|-------------|
| `image` | `Image` | PIL image |
| `image_id` | `int` | Original COCO image id |
| `width` / `height` | `int` | Image dimensions |
| `file_name` | `str` | Original filename |
| `objects` | `List[dict]` | `{id, category_id, bbox (xywh), area, iscrowd, segmentation}` |
| `relations` | `List[dict]` | `{id, subject_id, object_id, predicate_id}` — ids refer to `objects[*].id` |
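
Note that `bbox` uses the COCO `[x, y, width, height]` convention (top-left corner plus
size). If your pipeline expects corner coordinates instead, a minimal conversion sketch:

```python
def xywh_to_xyxy(bbox):
    """Convert a COCO-style [x, y, w, h] box to [x1, y1, x2, y2] corners."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]
```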
---
## Usage
```python
from datasets import load_dataset
import json

ds = load_dataset("maelic/PSG-coco-format")

# Recover the label maps from the embedded metadata
meta = json.loads(ds["train"].info.description)
cat_id2name = {c["id"]: c["name"] for c in meta["categories"]}
pred_id2name = {c["id"]: c["name"] for c in meta["rel_categories"]}

sample = ds["train"][0]
image = sample["image"]  # PIL Image

# Print every object as "<category> <bbox>"
for obj in sample["objects"]:
    print(cat_id2name[obj["category_id"]], obj["bbox"])

# Print every relation as a directed triplet
for rel in sample["relations"]:
    print(rel["subject_id"], "--", pred_id2name[rel["predicate_id"]], "->", rel["object_id"])
```
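As a quick sanity check, you can overlay the boxes and labels on the image with Pillow
(a minimal sketch; colours and label placement are arbitrary choices, not part of the
dataset):

```python
from PIL import ImageDraw

img = sample["image"].copy()
draw = ImageDraw.Draw(img)
for obj in sample["objects"]:
    x, y, w, h = obj["bbox"]  # COCO xywh format
    draw.rectangle([x, y, x + w, y + h], outline="red", width=2)
    draw.text((x, max(y - 12, 0)), cat_id2name[obj["category_id"]], fill="red")
img.save("psg_sample.png")
```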
---
## Citation
If you use this dataset, please cite the original PSG paper:
```bibtex
@inproceedings{yang2022panoptic,
  title        = {Panoptic scene graph generation},
  author       = {Yang, Jingkang and Ang, Yi Zhe and Guo, Zujin and Zhou, Kaiyang
                  and Zhang, Wayne and Liu, Ziwei},
  booktitle    = {European conference on computer vision},
  pages        = {178--196},
  year         = {2022},
  organization = {Springer},
}
```
If you use the SGG-Benchmark models, please also cite the REACT paper:
```bibtex
@inproceedings{Neau_2025_BMVC,
  author    = {Ma\"elic Neau and Paulo Eduardo Santos and Anne-Gwenn Bosser
               and Akihiro Sugimoto and Cedric Buche},
  title     = {REACT: Real-time Efficiency and Accuracy Compromise for Tradeoffs
               in Scene Graph Generation},
  booktitle = {36th British Machine Vision Conference 2025, {BMVC} 2025,
               Sheffield, UK, November 24-27, 2025},
  publisher = {BMVA},
  year      = {2025},
  url       = {https://bmva-archive.org.uk/bmvc/2025/assets/papers/Paper_239/paper.pdf},
}
```
---
## License
This dataset inherits the **MIT** license of the original PSG benchmark.
See the [MIT License](https://opensource.org/licenses/MIT) for details.