---
license: mit
pretty_name: garment-tracking
task_categories:
- robotics
---
# Dataset Card for VR-Folding Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Example](#dataset-example)
## Dataset Description
- **Homepage:** https://garment-tracking.robotflow.ai
- **Repository:** [GitHub](https://github.com/xiaoxiaoxh/GarmentTracking)
- **Paper:** [GarmentTracking: Category-Level Garment Pose Tracking](https://arxiv.org/pdf/2303.13913.pdf)
- **Point of Contact:**
## Dataset Summary

This is the **VR-Folding** dataset introduced in the CVPR 2023 paper [GarmentTracking: Category-Level Garment Pose Tracking](https://garment-tracking.robotflow.ai).
The dataset was recorded with [VR-Garment](https://github.com/xiaoxiaoxh/VR-Garment), a Unity-based garment-hand interaction environment.
To download the dataset, use the following shell snippet:
```
git lfs install
git clone https://huggingface.co/datasets/robotflow/garment-tracking
# to clone without the large files (just their pointers),
# prepend the clone command with the env var: GIT_LFS_SKIP_SMUDGE=1

# merge the multi-part .zip files (e.g. for folding) into a single archive;
# write to a new name so the glob cannot match the output file
cd data/folding
cat folding_dataset.z* > folding_dataset_merged.zip
# unzip
unzip folding_dataset_merged.zip
```
All the data are stored in [zarr](https://zarr.readthedocs.io/en/stable/) format.
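As a quick sanity check after unzipping, the snippet below opens the store with the zarr Python library and prints the hierarchy of one frame. The store path and frame key here are assumptions; substitute whatever the extracted archive actually contains.
```
import zarr

# Open the extracted dataset read-only. The store path is an assumption:
# point it at the directory produced by unzipping folding_dataset_merged.zip.
root = zarr.open("data/folding/folding_dataset.zarr", mode="r")

# Top-level keys are per-frame groups such as "00068_Tshirt_000000_000000".
print(list(root.keys())[:5])

# Inspect the group/array hierarchy of a single frame.
frame = root["00068_Tshirt_000000_000000"]
print(frame.tree())
```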
## Dataset Structure
Here is the detailed structure of one data example ([zarr](https://zarr.readthedocs.io/en/stable/) format) for a single frame:
```
00068_Tshirt_000000_000000
├── grip_vertex_id
│   ├── left_grip_vertex_id (1,) int32
│   └── right_grip_vertex_id (1,) int32
├── hand_pose
│   ├── left_hand_euler (25, 3) float32
│   ├── left_hand_pos (25, 3) float32
│   ├── right_hand_euler (25, 3) float32
│   └── right_hand_pos (25, 3) float32
├── marching_cube_mesh
│   ├── is_vertex_on_surface (6410,) bool
│   ├── marching_cube_faces (12816, 3) int32
│   └── marching_cube_verts (6410, 3) float32
├── mesh
│   ├── cloth_faces_tri (8312, 3) int32
│   ├── cloth_nocs_verts (4434, 3) float32
│   └── cloth_verts (4434, 3) float32
└── point_cloud
    ├── cls (30000,) uint8
    ├── nocs (30000, 3) float16
    ├── point (30000, 3) float16
    ├── rgb (30000, 3) uint8
    └── sizes (4,) int64
```
Specifically, we render 4-view RGB-D images with Unity and generate a concatenated point cloud for each frame. Here `grip_vertex_id` contains the mesh vertex indices of the points grasped by the left and right hands.
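For illustration, here is a minimal sketch of reading one frame's arrays with zarr and NumPy. The store path and frame key are assumptions, as are the interpretation of `sizes` as per-view point counts and of a negative `grip_vertex_id` as "not grasping"; verify both against the visualization scripts in the example below.
```
import numpy as np
import zarr

root = zarr.open("data/folding/folding_dataset.zarr", mode="r")  # assumed path
frame = root["00068_Tshirt_000000_000000"]

# Merged 4-view point cloud: positions, colors, NOCS coordinates, class labels.
points = frame["point_cloud/point"][:].astype(np.float32)  # (30000, 3)
rgb = frame["point_cloud/rgb"][:]                           # (30000, 3) uint8
nocs = frame["point_cloud/nocs"][:].astype(np.float32)      # (30000, 3)
cls = frame["point_cloud/cls"][:]                           # (30000,) uint8
sizes = frame["point_cloud/sizes"][:]                       # (4,) int64

# Assumption: `sizes` holds the number of points contributed by each of the
# 4 rendered views, so the concatenated cloud can be split back per view.
views = np.split(points, np.cumsum(sizes)[:-1])

# Garment mesh: world-space vertices, canonical (NOCS) vertices, triangles.
verts = frame["mesh/cloth_verts"][:]            # (4434, 3) float32
nocs_verts = frame["mesh/cloth_nocs_verts"][:]  # (4434, 3) float32
faces = frame["mesh/cloth_faces_tri"][:]        # (8312, 3) int32

# Vertex grasped by each hand; assumption: a negative id means "not grasping".
left_id = int(frame["grip_vertex_id/left_grip_vertex_id"][0])
if left_id >= 0:
    print("left hand grasps mesh vertex at", verts[left_id])
```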
## Dataset Example
Please see [example](data/data_examples/README.md) for example data and visualization scripts.
Here are two video examples for the flattening and folding tasks.

