---
license: cc-by-4.0
Modalities:
- Image
- 3D
language:
- en
size_categories:
- 1K<n<10K
tags:
- 3d
- blender
- vision
- template
pretty_name: FoldNet
---
# FoldNet Dataset
<div style="display: flex; justify-content: center; align-items: center; margin: 0 0;">
<img src="https://raw.githubusercontent.com/chen01yx/FoldNet_code/main/asset/fig/teaser.png" alt="Teaser Image" style="max-width: 100%; border-radius: 10px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);">
</div>
<p align="center" style="display: flex; justify-content: center; align-items: center; gap: 15px;">
<a href="https://pku-epic.github.io/FoldNet/"><img src="https://img.shields.io/badge/-project-yellow?logo=githubpages&logoSize=auto&labelColor=grey" alt="Project Page"></a>
<a href="https://ieeexplore.ieee.org/document/11359673"><img src="https://img.shields.io/badge/-paper-green?logo=ieee&logoSize=auto&labelColor=grey" alt="Paper"></a>
<a href="https://github.com/chen01yx/FoldNet_code"><img src="https://img.shields.io/badge/github-code-blue?logo=github&logoSize=auto" alt="Github Code"></a>
</p>
<strong>FoldNet</strong> is a high-fidelity synthetic dataset featuring over 4,000 unique meshes across four distinct garment categories. Designed to support a wide range of downstream applications—including robotic folding and cloth manipulation—FoldNet provides physically plausible geometries paired with photorealistic textures.
## 🔑 Key Features
- **Diverse Cloth Categories:** four garment types: T-shirts, trousers, vests, and hoodies.
- **High-Quality Meshes:**
- Watertight and manifold meshes.
- No self-intersections.
- Configurable resolution with adjustable vertex density and face sizing.
- **Diverse and Realistic Textures:** high-quality textures procedurally generated with Stable Diffusion 3.5.
- **Rich Annotations:**
- Automatically labeled manipulable keypoints for robotic interaction.
- Pre-computed UV mapping for seamless texturing.
- **Highly Scalable:** A robust procedural framework capable of generating an infinite variety of plausible garment shapes.
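The watertight/manifold property listed above can be verified with a simple combinatorial test: in a closed, edge-manifold triangle mesh, every undirected edge is shared by exactly two faces. This is a minimal sketch (not part of the dataset's tooling) that checks this condition on a face list:

```python
from collections import Counter

def is_edge_manifold_and_closed(faces):
    """True iff every undirected edge is shared by exactly two triangles,
    a necessary condition for a watertight, edge-manifold mesh."""
    edge_counts = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_counts[tuple(sorted((u, v)))] += 1
    return all(count == 2 for count in edge_counts.values())

# Toy example: a tetrahedron is closed and manifold.
tet = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_edge_manifold_and_closed(tet))  # True

# An open triangle fan has boundary edges, so it is not watertight.
fan = [(0, 1, 2), (0, 2, 3)]
print(is_edge_manifold_and_closed(fan))  # False
```

For production use, a mesh library's built-in checks (which also cover self-intersections and vertex manifoldness) would be more thorough.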
## 🔥 Get started
To download the full dataset, use the commands below. If you run into issues, refer to the official Hugging Face documentation on downloading repositories.
```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

# A token is only required for private or gated repos.
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/Bowie375/FoldNet

# If you want to clone without large files - just their pointers
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/Bowie375/FoldNet
```
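As an alternative to `git`, the `huggingface_hub` Python client can fetch the repository, optionally restricted to a subset of files. This is a sketch: `download_foldnet` is a hypothetical helper, and the `allow_patterns` value is only an illustration of pulling a single category.

```python
from huggingface_hub import snapshot_download  # pip install huggingface_hub

def download_foldnet(local_dir="FoldNet", patterns=("mesh/tshirt_sp/*",)):
    """Download the FoldNet dataset (or a subset matching `patterns`)
    into `local_dir` and return the local snapshot path."""
    return snapshot_download(
        repo_id="Bowie375/FoldNet",
        repo_type="dataset",
        local_dir=local_dir,
        allow_patterns=list(patterns),
    )

# download_foldnet()  # uncomment to fetch the T-shirt meshes
```

Dropping `allow_patterns` downloads the full dataset.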
## 🗂️ Dataset Structure
Under the `mesh` directory, we provide raw cloth meshes with a default texture:
```
mesh
├── tshirt_sp                # garment category
│   ├── 0
│   │   ├── mesh.obj         # generated mesh
│   │   ├── mesh.key.obj     # the same mesh with keypoints marked in red
│   │   ├── mesh_info.json   # mesh configuration, e.g. edge length and keypoint indices
│   │   ├── material.mtl
│   │   └── material.png     # default texture
│   ├── 1
│   │   └── ...
│   └── ...
├── trousers
│   └── ...
├── vest_close
│   └── ...
└── hooded_close
    └── ...
```
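Each `mesh.obj` is a standard Wavefront OBJ file, so it can be read with any OBJ loader. As a dependency-free sketch, here is a minimal parser for the `v` and `f` records (the exact set of OBJ features the dataset uses is an assumption; a library such as `trimesh` would handle the full format):

```python
def parse_obj(text):
    """Minimal OBJ reader: returns (vertices, faces) with 0-based face
    indices. Handles 'v x y z' and 'f i j k' lines, where face entries
    may carry optional /vt/vn parts."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

# Tiny inline example (one triangle) rather than a real dataset file.
sample = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1/1 2/2 3/3
"""
verts, faces = parse_obj(sample)
print(len(verts), faces)  # 3 [(0, 1, 2)]
```

To load a real mesh, pass the contents of e.g. `mesh/tshirt_sp/0/mesh.obj`; the keypoint indices referenced by `mesh_info.json` index into the vertex list.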
## 🛠️ Dataset Creation
**FoldNet** features a fully automated end-to-end data generation pipeline. Our framework procedurally synthesizes garment geometries, applies AI-driven texturing, and generates ground-truth annotations without human intervention.
For technical implementation details, source code, and step-by-step instructions to reproduce the dataset, please visit the [FoldNet GitHub Repository](https://github.com/chen01yx/FoldNet_code).
## 📅 TODO List
- [x] [2026.2] Released: 4,000+ synthetic 3D garment assets across four cloth categories, in the **mesh** directory.
- [ ] To be released: textured cloth data.
## Citation
```bibtex
@article{11359673,
author={Chen, Yuxing and Xiao, Bowen and Wang, He},
journal={IEEE Robotics and Automation Letters},
title={FoldNet: Learning Generalizable Closed-Loop Policy for Garment Folding via Keypoint-Driven Asset and Demonstration Synthesis},
year={2026},
volume={},
number={},
pages={1-8},
keywords={Clothing;Geometry;Imitation learning;Annotations;Trajectory;Training;Synthetic data;Pipelines;Grasping;Filtering;Bimanual manipulation;deep learning for visual perception;deep learning in grasping and manipulation},
doi={10.1109/LRA.2026.3656770}}
```