---
license: apache-2.0
language:
- en
tags:
- articulated object
size_categories:
- n<1K
---
# Dataset Card for XieNet
This is the **repaired** version of the [GAPartNet](https://arxiv.org/abs/2211.05272) dataset, which we use as the simulation dataset for [Vi-TacMan](https://vi-tacman.github.io/).
## Description
We identified numerous object meshes in the original dataset that lack proper cap geometry and manually repaired them to ensure completeness. The following images (object ID 47296) illustrate a typical geometric defect and our correction:
<div align="center">
<table>
<tr>
<td align="center">
<img src="figure/gapartnet_47296.png" width="300" alt="GAPartNet Original"/>
<br/>
GAPartNet (Original)
</td>
<td align="center">
<img src="figure/xienet_47296.png" width="300" alt="XieNet Repaired"/>
<br/>
XieNet (Repaired)
</td>
</tr>
</table>
</div>
We also provide the data generation code, which can be used to reproduce the simulated data presented in our paper [Vi-TacMan](https://vi-tacman.github.io/).
We sincerely thank the previous works ([SAPIEN](https://arxiv.org/abs/2003.08515), [PartNet](https://arxiv.org/abs/1812.02713), [GAPartNet](https://arxiv.org/abs/2211.05272)) and hope our repaired dataset can help advance this community.
## Usage
### Installation
First, install the required dependencies:
```bash
pip install -r requirements.txt
```
**Requirements:**
- Python 3.10
- SAPIEN 3.0.1
### Data Generation
The main script `main.py` generates simulated data by rendering articulated objects from multiple camera viewpoints with different articulation states.
#### Basic Usage
```bash
python main.py \
--data_root_dir /path/to/XieNet \
--save_dir /path/to/output/directory
```
#### Full Command Line Options
```bash
python main.py \
    --data_root_dir /path/to/XieNet/dataset \
    --save_dir /path/to/output/directory \
    --seed 42 \
    --render_width 640 \
    --render_height 576 \
    --fovy 65.0 \
    --near 0.01 \
    --far 4.0 \
    --enable_rt \
    --min_movable_area 4096 \
    --max_flow_dist 0.1 \
    --save_vis
```
- `--data_root_dir`: Path to the XieNet dataset root
- `--save_dir`: Output directory for rendered data
- `--seed`: Random seed (default: 42)
- `--render_width`: Render width in pixels (default: 640)
- `--render_height`: Render height in pixels (default: 576)
- `--fovy`: Vertical field of view in degrees (default: 65.0)
- `--near`: Near clipping plane (default: 0.01)
- `--far`: Far clipping plane (default: 4.0)
- `--enable_rt`: Enable ray tracing (optional flag)
- `--min_movable_area`: Minimum pixel area for movable parts (default: 4096)
- `--max_flow_dist`: Maximum flow distance (default: 0.1)
- `--save_vis`: Save visualization images (default: True)
#### Supported Object Categories
The data generation focuses on the following articulated object categories, for which we provide repaired meshes:
- Dishwasher
- Door
- Microwave
- Oven
- Refrigerator
- Safe
- StorageFurniture
- Table
- Toilet
- TrashCan
- WashingMachine
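As a minimal sketch, the categories above can be used to filter objects before launching generation jobs. Note that the `<data_root>/<category>/<object_id>` directory layout assumed below is hypothetical, purely for illustration; consult the actual dataset structure before use.

```python
from pathlib import Path

# Categories covered by the repaired meshes (taken from this card).
CATEGORIES = [
    "Dishwasher", "Door", "Microwave", "Oven", "Refrigerator", "Safe",
    "StorageFurniture", "Table", "Toilet", "TrashCan", "WashingMachine",
]

def find_objects(data_root, wanted=CATEGORIES):
    """Yield (category, object_dir) pairs.

    Assumes a <data_root>/<category>/<object_id> layout, which is an
    illustrative guess, not the documented structure of the release.
    """
    root = Path(data_root)
    for category in wanted:
        cat_dir = root / category
        if not cat_dir.is_dir():
            continue  # category absent in this copy of the dataset
        for obj_dir in sorted(cat_dir.iterdir()):
            if obj_dir.is_dir():
                yield category, obj_dir
```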
#### Output Data Format
For each object and camera viewpoint, the script generates:
- `pcd_camera.npy`: Structured numpy array containing:
- `point`: 3D point coordinates in camera frame
- `rgb`: RGB color values
- `articulation_flow`: 3D flow vectors for articulation motion
- `mask_holdable`: Binary mask for holdable parts
- `mask_movable`: Binary mask for movable parts
- `mask_ground`: Binary mask for ground plane
- `camera_pose.txt`: 4x4 camera pose matrix
- `camera_intrinsics.txt`: 3x3 camera intrinsic matrix
- `vis/` folder (if `--save_vis` is enabled): Visualization images including color, depth, masks, and flow visualizations
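A minimal loader for one rendered sample might look like the sketch below. The structured-array field names follow this card, but the exact dtypes and the camera-to-world convention of `camera_pose.txt` are assumptions; verify them against the generation code.

```python
import numpy as np

# Field layout as listed above; the dtypes are an assumption
# (the card does not state the exact numpy dtypes).
pcd_dtype = np.dtype([
    ("point", np.float32, (3,)),
    ("rgb", np.uint8, (3,)),
    ("articulation_flow", np.float32, (3,)),
    ("mask_holdable", np.bool_),
    ("mask_movable", np.bool_),
    ("mask_ground", np.bool_),
])

def load_sample(sample_dir):
    """Load one sample and lift its points into the world frame."""
    pcd = np.load(f"{sample_dir}/pcd_camera.npy")
    pose = np.loadtxt(f"{sample_dir}/camera_pose.txt")     # 4x4, assumed camera-to-world
    K = np.loadtxt(f"{sample_dir}/camera_intrinsics.txt")  # 3x3 intrinsics
    pts_cam = pcd["point"]                                 # (N, 3) points in camera frame
    pts_world = pts_cam @ pose[:3, :3].T + pose[:3, 3]     # rigid transform to world frame
    return pcd, pts_world, K
```

The per-point masks can then be applied directly, e.g. `pts_world[pcd["mask_movable"]]` selects the movable-part points.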
## Citation
If you find this dataset useful, please cite our paper:
```bibtex
@inproceedings{cui2026vitacman,
title = {Vi-{T}ac{M}an: Articulated Object Manipulation via Vision and Touch},
author = {Cui, Leiyao and Zhao, Zihang and Xie, Sirui and Zhang, Wenhuan and Han, Zhi and Zhu, Yixin},
booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
year = {2026},
organization = {IEEE}
}
```