---
license: cc-by-4.0
---

# Dataset Description

**RoboAfford** is a large-scale dataset with dense, affordance-aware annotations for instruction-grounded manipulation.

This dataset contains **819,987 images** and **1.9 million QA pairs**, unifying object affordances and spatial affordances to support interaction-centric learning in robotics.

# Dataset Composition

**RoboAfford** aggregates images from multiple source datasets and generates QA pairs for them, providing a comprehensive resource for affordance understanding.

It consists of the following components (a loading sketch follows the list):

- **LVIS_absxy_513K.json**: 513K object detection QA pairs for 152,152 images sourced from [LVIS](https://www.lvisdataset.org/)
- **pointing_absxy_190K.json**: 190K object pointing QA pairs for 63,907 images selected from [PixMo-Points](https://huggingface.co/datasets/allenai/pixmo-points)
- **object_affordance_prediction_absxy_561K.json**: 561K object affordance prediction QA pairs for 45,790 images sourced from [PACO-LVIS](https://github.com/facebookresearch/paco)
- **object_ref_max_points_10_absxy_347K.json**: 347K object reference QA pairs for 287,956 images sourced from [RoboPoint-data](https://huggingface.co/datasets/wentao-yuan/robopoint-data)
- **region_ref_max_points_10_absxy_320K.json**: 320K region reference QA pairs for 270,182 images sourced from [RoboPoint-data](https://huggingface.co/datasets/wentao-yuan/robopoint-data)
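
Each component is a plain JSON file, so no special tooling is needed to inspect it. Below is a minimal loading sketch in Python, assuming the files above have been downloaded into a local `RoboAfford/` directory (the path is an assumption; adjust it to wherever you place the files):

```python
import json
from pathlib import Path

# Assumed local download location of the JSON files listed above.
DATA_DIR = Path("RoboAfford")

FILES = [
    "LVIS_absxy_513K.json",
    "pointing_absxy_190K.json",
    "object_affordance_prediction_absxy_561K.json",
    "object_ref_max_points_10_absxy_347K.json",
    "region_ref_max_points_10_absxy_320K.json",
]

for name in FILES:
    with open(DATA_DIR / name) as f:
        samples = json.load(f)  # each file holds a list of conversation records
    # Every record pairs a relative image path with human/gpt conversation turns.
    print(f"{name}: {len(samples):,} records, first image: {samples[0]['image']}")
```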

# Dataset Format

Each JSON file contains a list of structured conversations with image references. An example QA pair and its associated image are shown below:

```
{
  "id": "paco_403013",
  "image": "train2017/000000403013.jpg",
  "conversations": [
    {
      "from": "human",
      "value": "<image>\nWhat appliance can be used to heat food quickly? Your answer should be formatted as a list of tuples, i.e. [(x1, y1, x2, y2), ...], where each tuple contains the x, y coordinates of the top-left corner and bottom-right corner of the bounding box. The coordinates should be rounded to two decimal places, indicating the absolute pixel locations of the points in the image."
    },
    {
      "from": "gpt",
      "value": "[(258.14, 174.87, 283.79, 222.17)]"
    },
    {
      "from": "human",
      "value": "What appliance can be used to heat food quickly? Your answer should be formatted as a list of tuples, i.e. [(x1, y1), (x2, y2), ...], where each tuple contains the x and y coordinates of a point on the object. The coordinates should be rounded to two decimal places, indicating the absolute pixel locations of the points in the image."
    },
    {
      "from": "gpt",
      "value": "[(258.61, 213.0)]"
    }
  ]
}
```

![Example image for the QA pair above](example.jpg)
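
The answers in the `gpt` turns are serialized as Python-style literals, so they can be recovered with `ast.literal_eval`; boxes are 4-tuples and points are 2-tuples, both in absolute pixel coordinates. A minimal parsing sketch (the `parse_answer` helper is hypothetical, not part of the dataset tooling):

```python
import ast

def parse_answer(value: str) -> list[tuple[float, ...]]:
    """Parse a gpt answer string like '[(258.14, 174.87, 283.79, 222.17)]'."""
    return ast.literal_eval(value)

# Boxes are 4-tuples (x1, y1, x2, y2); points are 2-tuples (x, y),
# both given as absolute pixel coordinates in the image.
for coords in parse_answer("[(258.14, 174.87, 283.79, 222.17)]"):
    if len(coords) == 4:
        x1, y1, x2, y2 = coords
        print(f"box: top-left ({x1}, {y1}), bottom-right ({x2}, {y2})")
    else:
        x, y = coords
        print(f"point: ({x}, {y})")
```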

# Evaluation

For benchmarking protocols and evaluation metrics, please refer to [RoboAfford-Eval](https://huggingface.co/datasets/tyb197/RoboAfford-Eval) and the [RoboAfford GitHub repository](https://github.com/tyb197/RoboAfford).