---
license: apache-2.0
task_categories:
- robotics
---

# MemoryBench Dataset

MemoryBench is a benchmark dataset designed to evaluate spatial memory and action recall in robotic manipulation. It accompanies the **SAM2Act+** framework, introduced in the paper *SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation*. For detailed task descriptions and more information about the paper, please visit SAM2Act's [website](https://sam2act.github.io).

The dataset contains scripted demonstrations for three memory-dependent tasks built in RLBench (the same version used in [PerAct](https://peract.github.io/)):

- **Reopen Drawer**: Tests 3D spatial memory along the z-axis.
- **Put Block Back**: Evaluates 2D spatial memory in the x-y plane.
- **Rearrange Block**: Requires backward reasoning based on prior actions.

## Dataset Structure

The dataset is organized as follows:
```
data/
├── train/   # 100 episodes per task
├── test/    # 25 episodes per task
└── files/   # task files (.ttm & .py)
```

- **data/train/**: Contains three zip files, one per task. Each zip file holds **100** scripted demonstrations for training.
- **data/test/**: Contains the same three zip files, but each holds **25** held-out demonstrations for evaluation.
- **data/files/**: Includes the `.ttm` and `.py` files required to run evaluation. A download-and-extract sketch follows below.
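
If you prefer to fetch and unpack the archives programmatically, a minimal sketch using `huggingface_hub` might look like the following; `REPO_ID` is a placeholder for this dataset's Hugging Face identifier, and the zip layout is assumed to match the tree above.

```python
import zipfile
from pathlib import Path

from huggingface_hub import snapshot_download

# Download a local snapshot of the dataset repo.
# REPO_ID is a placeholder -- substitute this dataset's actual id.
root = Path(snapshot_download(repo_id="REPO_ID", repo_type="dataset"))

# Extract each per-task archive into a folder next to its zip file.
for split in ("train", "test"):
    for archive in (root / "data" / split).glob("*.zip"):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(archive.parent / archive.stem)
```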

## Usage

This dataset is designed to be used in the same manner as the RLBench 18 Tasks proposed by [PerAct](https://peract.github.io/). You can follow the same usage guidelines, or watch SAM2Act's [code repository](https://github.com/sam2act/sam2act) for further instructions.
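
For a quick sanity check of the data itself, the sketch below assumes the PerAct-style RLBench episode layout (`all_variations/episodes/episodeN/` containing `low_dim_obs.pkl` plus per-camera image folders) and a hypothetical task folder name `reopen_drawer`; adjust the path to wherever you extracted the archives. Unpickling the demo requires the RLBench package to be importable.

```python
import pickle
from pathlib import Path

# Hypothetical path -- point this at one extracted episode.
episode = Path("data/train/reopen_drawer/all_variations/episodes/episode0")

# Low-dimensional observations (gripper pose, joint positions, etc.);
# the pickle stores RLBench objects, so rlbench must be installed.
with open(episode / "low_dim_obs.pkl", "rb") as f:
    demo = pickle.load(f)
print(f"episode length: {len(demo)} timesteps")

# Per-camera RGB frames are typically numbered PNGs, e.g. under front_rgb/.
frames = sorted((episode / "front_rgb").glob("*.png"))
print(f"front camera frames: {len(frames)}")
```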

## Acknowledgement

We would like to acknowledge [Haoquan Fang](https://hq-fang.github.io/) for leading the conceptualization of MemoryBench and providing key ideas and instructions for task design, and [Wilbert Pumacay](https://wpumacay.github.io/) for implementing the tasks and integrating them seamlessly into the dataset. Their combined efforts, along with the oversight of [Jiafei Duan](https://duanjiafei.com/) and all co-authors, were essential in developing this benchmark for evaluating spatial memory in robotic manipulation.

## Citation

If you use this dataset, please cite the SAM2Act paper:

```bibtex
@misc{fang2025sam2act,
      title={SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation},
      author={Haoquan Fang and Markus Grotz and Wilbert Pumacay and Yi Ru Wang and Dieter Fox and Ranjay Krishna and Jiafei Duan},
      year={2025},
      eprint={2501.18564},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2501.18564},
}
```