---
license: apache-2.0
language:
- en
tags:
- articulated object
size_categories:
- n<1K
---

# Dataset Card for XieNet

This is the **repaired** version of the [GAPartNet](https://arxiv.org/abs/2211.05272) dataset, which we use as the simulation dataset for [Vi-TacMan](https://vi-tacman.github.io/).

## Description

We identified numerous object meshes in the original dataset that lack proper cap geometry, and we manually repaired these meshes to ensure completeness. The following images (object ID: 47296) illustrate a typical geometric defect and our correction:

<div align="center">
  <table>
    <tr>
      <td align="center">
        <img src="figure/gapartnet_47296.png" width="300" alt="GAPartNet Original"/>
        <br/>
        GAPartNet (Original)
      </td>
      <td align="center">
        <img src="figure/xienet_47296.png" width="300" alt="XieNet Repaired"/>
        <br/>
        XieNet (Repaired)
      </td>
    </tr>
  </table>
</div>

We also provide the data generation code, which can be used to reproduce the simulated data presented in our paper [Vi-TacMan](https://vi-tacman.github.io/).

We sincerely thank the authors of the prior works ([SAPIEN](https://arxiv.org/abs/2003.08515), [PartNet](https://arxiv.org/abs/1812.02713), [GAPartNet](https://arxiv.org/abs/2211.05272)) and hope our repaired dataset helps advance the community.

## Usage

### Installation

First, install the required dependencies:

```bash
pip install -r requirements.txt
```

**Requirements:**
- Python 3.10
- SAPIEN 3.0.1

### Data Generation

The main script, `main.py`, generates simulated data by rendering articulated objects from multiple camera viewpoints and in different articulation states.
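
To illustrate what rendering from "multiple camera viewpoints" involves, here is a minimal sketch of building look-at camera poses sampled around an object. The `look_at` helper, the sampling radii/angles, and the OpenGL-style axis convention (camera looks down -z) are our assumptions for illustration, not necessarily what `main.py` or SAPIEN uses internally:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """Build a 4x4 camera-to-world pose with the camera at `eye` looking at `target`.

    Assumes an OpenGL-style convention (camera looks down -z); the actual
    convention used by main.py may differ.
    """
    eye = np.asarray(eye, dtype=float)
    target = np.asarray(target, dtype=float)
    up = np.asarray(up, dtype=float)
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    pose = np.eye(4)
    pose[:3, 0] = right
    pose[:3, 1] = true_up
    pose[:3, 2] = -forward
    pose[:3, 3] = eye
    return pose

# Sample a few viewpoints on a circle around a hypothetical object at the origin.
angles = np.linspace(0.0, np.pi, 4)
poses = [look_at(eye=(2.0 * np.cos(a), 2.0 * np.sin(a), 1.0), target=(0.0, 0.0, 0.5))
         for a in angles]
```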

#### Basic Usage

```bash
python main.py \
    --data_root_dir /path/to/XieNet \
    --save_dir /path/to/output/directory
```

#### Full Command Line Options

```bash
python main.py \
    --data_root_dir /path/to/XieNet/dataset \
    --save_dir /path/to/output/directory \
    --seed 42 \
    --render_width 640 \
    --render_height 576 \
    --fovy 65.0 \
    --near 0.01 \
    --far 4.0 \
    --enable_rt \
    --min_movable_area 4096 \
    --max_flow_dist 0.1 \
    --save_vis
```

- `--data_root_dir`: path to the XieNet dataset root
- `--save_dir`: output directory for rendered data
- `--seed`: random seed (default: 42)
- `--render_width` / `--render_height`: render resolution (default: 640 x 576)
- `--fovy`: field of view in degrees (default: 65.0)
- `--near` / `--far`: near and far clipping planes (default: 0.01 / 4.0)
- `--enable_rt`: enable ray tracing (optional)
- `--min_movable_area`: minimum area for movable parts (default: 4096)
- `--max_flow_dist`: maximum flow distance (default: 0.1)
- `--save_vis`: save visualization images (default: True)
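
For reference, a pinhole intrinsic matrix implied by the default rendering parameters can be sketched as follows. Treating `--fovy` as the vertical field of view and assuming square pixels are our assumptions; check the script output (`camera_intrinsics.txt`) if exact values matter:

```python
import math

width, height, fovy_deg = 640, 576, 65.0  # defaults from the options above

# Pinhole model: focal length in pixels from the vertical field of view
fy = height / (2.0 * math.tan(math.radians(fovy_deg) / 2.0))
fx = fy                           # assuming square pixels
cx, cy = width / 2.0, height / 2.0

K = [[fx, 0.0, cx],
     [0.0, fy, cy],
     [0.0, 0.0, 1.0]]
```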

#### Supported Object Categories

The data generation focuses on the following articulated object categories, for which we provide repaired meshes:
- Dishwasher
- Door
- Microwave
- Oven
- Refrigerator
- Safe
- StorageFurniture
- Table
- Toilet
- TrashCan
- WashingMachine

#### Output Data Format

For each object and camera viewpoint, the script generates:

- `pcd_camera.npy`: structured NumPy array containing:
  - `point`: 3D point coordinates in the camera frame
  - `rgb`: RGB color values
  - `articulation_flow`: 3D flow vectors for the articulation motion
  - `mask_holdable`: binary mask for holdable parts
  - `mask_movable`: binary mask for movable parts
  - `mask_ground`: binary mask for the ground plane
- `camera_pose.txt`: 4x4 camera pose matrix
- `camera_intrinsics.txt`: 3x3 camera intrinsic matrix
- `vis/` folder (if `--save_vis` is enabled): visualization images including color, depth, masks, and flow
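
A minimal sketch of consuming one sample, assuming the structured-array fields listed above. The dtype below is hypothetical and stands in for a real `pcd_camera.npy` (whose dtype `np.load` recovers automatically), and the camera-to-world interpretation of the pose matrix is our assumption:

```python
import numpy as np

# Hypothetical dtype mirroring the fields described above
dtype = np.dtype([
    ("point", np.float32, (3,)),
    ("rgb", np.uint8, (3,)),
    ("articulation_flow", np.float32, (3,)),
    ("mask_holdable", np.bool_),
    ("mask_movable", np.bool_),
    ("mask_ground", np.bool_),
])

# Synthetic stand-in; in practice: pcd = np.load("pcd_camera.npy")
pcd = np.zeros(4, dtype=dtype)
pcd["point"] = [[0.0, 0.0, 1.0], [0.1, 0.0, 1.0], [0.0, 0.1, 1.0], [0.1, 0.1, 1.0]]
pcd["mask_movable"] = [True, True, False, False]

# In practice: pose = np.loadtxt("camera_pose.txt")  # 4x4
pose = np.eye(4)

# Lift camera-frame points into the pose's frame via homogeneous coordinates
pts = pcd["point"].astype(np.float64)
pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
pts_world = (pose @ pts_h.T).T[:, :3]

# Boolean masks select part-specific subsets of the cloud
movable_pts = pts_world[pcd["mask_movable"]]
```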

## Citation

If you find this dataset useful, please cite our paper:

```bibtex
@inproceedings{cui2026vitacman,
  title        = {Vi-{T}ac{M}an: Articulated Object Manipulation via Vision and Touch},
  author       = {Cui, Leiyao and Zhao, Zihang and Xie, Sirui and Zhang, Wenhuan and Han, Zhi and Zhu, Yixin},
  booktitle    = {IEEE International Conference on Robotics and Automation (ICRA)},
  year         = {2026},
  organization = {IEEE}
}
```