---
license: cc-by-4.0
---

# Dataset Description

RoboAfford is a large-scale dataset with dense, affordance-aware annotations for instruction-grounded manipulation. It contains 819,987 images and 1.9 million QA pairs, unifying object affordances and spatial affordances to support interaction-centric learning in robotics.
# Dataset Composition

RoboAfford aggregates images from multiple datasets and generates QA pairs to provide a comprehensive dataset for affordance understanding. It consists of the following components:

- **LVIS_absxy_513K.json**: 513K object detection QA pairs for 152,152 images sourced from [LVIS](https://www.lvisdataset.org/)
- **pointing_absxy_190K.json**: 190K object pointing QA pairs for 63,907 images selected from [PixMo-Points](https://huggingface.co/datasets/allenai/pixmo-points)
- **object_affordance_prediction_absxy_561K.json**: 561K object affordance prediction QA pairs for 45,790 images sourced from [PACO-LVIS](https://github.com/facebookresearch/paco)
- **object_ref_max_points_10_absxy_347K.json**: 347K object reference QA pairs for 287,956 images sourced from [RoboPoint](https://huggingface.co/datasets/wentao-yuan/robopoint-data)
- **region_ref_max_points_10_absxy_320K.json**: 320K region reference QA pairs for 270,182 images sourced from [RoboPoint](https://huggingface.co/datasets/wentao-yuan/robopoint-data)
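Every component above is a standalone JSON file of records in the same annotation schema. A minimal loading sketch (the record below is an illustrative sample; it round-trips through a temporary file so the snippet is self-contained, whereas in practice you would open a component file such as `LVIS_absxy_513K.json` directly):

```python
import json
import tempfile

# One record in the annotation schema shared by all component files.
sample = [
    {
        "id": "paco_403013",
        "image": "train2017/000000403013.jpg",
        "conversations": [
            {"from": "human", "value": "<image>\nWhat appliance can be used to heat food quickly? ..."},
            {"from": "gpt", "value": "[(258.61, 213.0)]"},
        ],
    }
]

# A temporary file stands in for a real component file here.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(sample, f)
    path = f.name

# Loading is a plain json.load over the whole file.
with open(path) as fh:
    records = json.load(fh)

print(len(records), records[0]["image"])
```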
# Dataset Format

Each JSON file contains a list of structured conversations with image references. Each QA pair follows this format:

```json
{
    "id": "paco_403013",
    "image": "train2017/000000403013.jpg",
    "conversations": [
        {
            "from": "human",
            "value": "<image>\nWhat appliance can be used to heat food quickly? Your answer should be formatted as a list of tuples, i.e. [(x1, y1, x2, y2), ...], where each tuple contains the x, y coordinates of the top-left corner and bottom-right corner of the bounding box. The coordinates should be rounded to two decimal places, indicating the absolute pixel locations of the points in the image."
        },
        {
            "from": "gpt",
            "value": "[(258.14, 174.87, 283.79, 222.17)]"
        },
        {
            "from": "human",
            "value": "What appliance can be used to heat food quickly? Your answer should be formatted as a list of tuples, i.e. [(x1, y1), (x2, y2), ...], where each tuple contains the x and y coordinates of a point on the object. The coordinates should be rounded to two decimal places, indicating the absolute pixel locations of the points in the image."
        },
        {
            "from": "gpt",
            "value": "[(258.61, 213.0)]"
        }
    ]
}
```
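The answer strings encode absolute pixel coordinates as a Python-style list of tuples. A minimal parsing sketch (the helper name is ours, not part of any dataset tooling) that recovers numeric tuples from both the bounding-box and the point formats:

```python
import ast

def parse_coords(answer: str):
    # Answers look like "[(x1, y1, x2, y2), ...]" for bounding boxes
    # or "[(x1, y1), ...]" for points; literal_eval handles both safely,
    # without executing arbitrary code the way eval() would.
    return [tuple(map(float, t)) for t in ast.literal_eval(answer)]

bbox = parse_coords("[(258.14, 174.87, 283.79, 222.17)]")  # one box
points = parse_coords("[(258.61, 213.0)]")                 # one point
print(bbox, points)
```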
# Evaluation

For benchmarking protocols and evaluation metrics, refer to [RoboAfford-Eval](https://huggingface.co/datasets/tyb197/RoboAfford-Eval) and the [RoboAfford GitHub repository](https://github.com/tyb197/RoboAfford).