Add paper link, project page, code, and task category
#2
by nielsr HF Staff - opened
README.md
CHANGED
---
license: mit
size_categories:
- n<1K
pretty_name: AniGen Sample Data
tags:
- 3d
- image
task_categories:
- image-to-3d
configs:
- config_name: default
  default: true
  data_files:
  - split: train
    path: samples.csv
---

# AniGen Sample Data

[Paper](https://huggingface.co/papers/2604.08746) | [Project Page](https://yihua7.github.io/AniGen_web/) | [GitHub](https://github.com/VAST-AI-Research/AniGen)

This directory is a compact example subset of the AniGen training dataset, as presented in the paper [AniGen: Unified $S^3$ Fields for Animatable 3D Asset Generation](https://huggingface.co/papers/2604.08746).

AniGen is a unified framework that directly generates animation-ready 3D assets conditioned on a single image by representing shape, skeleton, and skinning as mutually consistent $S^3$ Fields.
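The `configs` block above registers `samples.csv` as the default train split, so each row supplies one sample key. A minimal stdlib sketch of iterating it (the `file_identifier` column name and the sample values are assumptions, not taken from the real CSV):

```python
import csv
import io

# Hypothetical stand-in for samples.csv; the real header may differ.
SAMPLES_CSV = "file_identifier\nchar_001\nchar_002\n"

# csv.DictReader yields one dict per data row, keyed by the header line.
rows = list(csv.DictReader(io.StringIO(SAMPLES_CSV)))
keys = [row["file_identifier"] for row in rows]
print(keys)  # ['char_001', 'char_002']
```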
## What Is Included

For a row with sample key `<file_identifier>`:

- voxel files: `voxels/<file_identifier>.ply` and `voxels/<file_identifier>_skeleton.ply`
- image feature: `features/dinov2_vitl14_reg/<file_identifier>.npz`
- mesh latents: files under `latents/*/<file_identifier>.npz`
- structure latents: files under `ss_latents/*/<file_identifier>.npz`
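The layout above maps straightforwardly to lookup code. A small sketch, assuming only the directory names listed above (the dataset root and sample key are placeholders):

```python
from pathlib import Path

def sample_paths(root: str, file_identifier: str) -> dict:
    """Resolve the files belonging to one sample key, per the layout above."""
    root = Path(root)
    return {
        "voxels": root / "voxels" / f"{file_identifier}.ply",
        "skeleton": root / "voxels" / f"{file_identifier}_skeleton.ply",
        "image_feature": root / "features" / "dinov2_vitl14_reg" / f"{file_identifier}.npz",
        # Latents live under per-variant subdirectories, hence the globs.
        "mesh_latents": sorted(root.glob(f"latents/*/{file_identifier}.npz")),
        "structure_latents": sorted(root.glob(f"ss_latents/*/{file_identifier}.npz")),
    }

paths = sample_paths(".", "char_001")
print(paths["image_feature"])  # features/dinov2_vitl14_reg/char_001.npz
```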
## Sample Usage (Training)

According to the [official repository](https://github.com/VAST-AI-Research/AniGen), you can use this data for training by following these stages:

```bash
# Stage 1: Skin AutoEncoder
python train.py --config configs/anigen_skin_ae.json --output_dir outputs/anigen_skin_ae

# Stage 2: Sparse Structure DAE
python train.py --config configs/ss_dae.json --output_dir outputs/ss_dae

# Stage 3: Structured Latent DAE
python train.py --config configs/slat_dae.json --output_dir outputs/slat_dae

# Stage 4: SS Flow Matching (image-conditioned generation)
python train.py --config configs/ss_flow_duet.json --output_dir outputs/ss_flow_duet

# Stage 5: SLAT Flow Matching (image-conditioned generation)
python train.py --config configs/slat_flow_auto.json --output_dir outputs/slat_flow_auto
```
## Citation

```bibtex
@article{huang2026anigen,
  title   = {AniGen: Unified $S^3$ Fields for Animatable 3D Asset Generation},
  author  = {Huang, Yi-Hua and Zhou, Zi-Xin and He, Yuting and Chang, Chirui and Pu, Cheng-Feng and Yang, Ziyi and Guo, Yuan-Chen and Cao, Yan-Pei and Qi, Xiaojuan},
  journal = {ACM SIGGRAPH},
  year    = {2026}
}
```