---
license: mit
size_categories:
- n<1K
pretty_name: AniGen Sample Data
tags:
- 3d
- image
task_categories:
- image-to-3d
configs:
- config_name: default
  default: true
  data_files:
  - split: train
    path: samples.csv
---

# AniGen Sample Data

[Paper](https://huggingface.co/papers/2604.08746) | [Project Page](https://yihua7.github.io/AniGen_web/) | [GitHub](https://github.com/VAST-AI-Research/AniGen)

This directory is a compact example subset of the AniGen training dataset, as presented in the paper [AniGen: Unified $S^3$ Fields for Animatable 3D Asset Generation](https://huggingface.co/papers/2604.08746).

AniGen is a unified framework that directly generates animation-ready 3D assets conditioned on a single image by representing shape, skeleton, and skinning as mutually consistent $S^3$ Fields.

## What Is Included

- 10 examples
- 10 unique raw assets
- Full cross-modal files for each example
- A subset `metadata.csv` with 10 rows
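
The subset's `metadata.csv` can be read with nothing beyond the standard library. The sketch below assumes a `sha256` key column and a `local_path` column as described later in this card; the toy CSV literal is an illustrative stand-in for the real file, whose keys are full sha256 digests.

```python
import csv
import io

def read_sample_keys(csv_text: str) -> list[str]:
    """Return the sample key (sha256 column) of every metadata row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["sha256"] for row in reader]

# Toy stand-in for the real metadata.csv (column names per this card).
toy_csv = "sha256,local_path\nabc123,raw/asset_a.glb\ndef456,raw/asset_b.glb\n"
print(read_sample_keys(toy_csv))  # ['abc123', 'def456']
```

For the real file, pass `open("metadata.csv").read()` (or adapt the helper to take a file handle) instead of the toy literal.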

The retained directory layout follows the core structure of the reference test set:

```text
raw/
renders/
renders_cond/
skeleton/
voxels/
features/
metadata.csv
statistics.txt

latents/     (encoded by the trained slat auto-encoder)
ss_latents/  (encoded by the trained ss auto-encoder)
```
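
To sanity-check a local copy against this layout, a minimal sketch could walk the top-level entries; the helper and the `EXPECTED` list below are illustrative, not shipped with the dataset.

```python
from pathlib import Path

# Top-level entries this card's layout describes.
EXPECTED = [
    "raw", "renders", "renders_cond", "skeleton", "voxels",
    "features", "metadata.csv", "statistics.txt", "latents", "ss_latents",
]

def missing_entries(root) -> list[str]:
    """Return the expected top-level entries absent under `root`."""
    root = Path(root)
    return [name for name in EXPECTED if not (root / name).exists()]
```

Running `missing_entries(".")` from the dataset root should return an empty list for a complete copy.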

## How To Read One Example

Each row in `metadata.csv` corresponds to one example, identified by the `sha256` column. In practice this value is the sample key used across all modalities.

For a row with sample key `<file_identifier>`:

- raw asset: the `local_path` field, for example `raw/<raw_file>`
- rendered views: `renders/<file_identifier>/`
- conditional rendered views: `renders_cond/<file_identifier>/`
- skeleton files: `skeleton/<file_identifier>/`
- voxel files: `voxels/<file_identifier>.ply` and `voxels/<file_identifier>_skeleton.ply`
- image feature: `features/dinov2_vitl14_reg/<file_identifier>.npz`
- mesh latents: files under `latents/*/<file_identifier>.npz`
- structure latents: files under `ss_latents/*/<file_identifier>.npz`
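
The mapping above can be put into code with a small helper; the function name is ours (not part of any AniGen API), and the paths and glob patterns simply mirror the list above.

```python
from pathlib import Path

def modality_paths(root, sample_key: str) -> dict:
    """Map one sample key to the per-modality files described above."""
    root = Path(root)
    return {
        "renders": root / "renders" / sample_key,
        "renders_cond": root / "renders_cond" / sample_key,
        "skeleton": root / "skeleton" / sample_key,
        "voxels": root / "voxels" / f"{sample_key}.ply",
        "voxels_skeleton": root / "voxels" / f"{sample_key}_skeleton.ply",
        "image_feature": root / "features" / "dinov2_vitl14_reg" / f"{sample_key}.npz",
        # Latents live under per-model subdirectories, so glob for them.
        "mesh_latents": sorted((root / "latents").glob(f"*/{sample_key}.npz")),
        "structure_latents": sorted((root / "ss_latents").glob(f"*/{sample_key}.npz")),
    }

paths = modality_paths(".", "deadbeef")
print(paths["voxels"].as_posix())  # voxels/deadbeef.ply
```

The `.npz` files (image features and latents) can then be opened with `numpy.load`.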

## Sample Usage (Training)

According to the [official repository](https://github.com/VAST-AI-Research/AniGen), you can use this data for training by running the following stages in order:

```bash
# Stage 1: Skin AutoEncoder
python train.py --config configs/anigen_skin_ae.json --output_dir outputs/anigen_skin_ae

# Stage 2: Sparse Structure DAE
python train.py --config configs/ss_dae.json --output_dir outputs/ss_dae

# Stage 3: Structured Latent DAE
python train.py --config configs/slat_dae.json --output_dir outputs/slat_dae

# Stage 4: SS Flow Matching (image-conditioned generation)
python train.py --config configs/ss_flow_duet.json --output_dir outputs/ss_flow_duet

# Stage 5: SLAT Flow Matching (image-conditioned generation)
python train.py --config configs/slat_flow_auto.json --output_dir outputs/slat_flow_auto
```

## Citation

```bibtex
@article{huang2026anigen,
  title   = {AniGen: Unified $S^3$ Fields for Animatable 3D Asset Generation},
  author  = {Huang, Yi-Hua and Zhou, Zi-Xin and He, Yuting and Chang, Chirui
             and Pu, Cheng-Feng and Yang, Ziyi and Guo, Yuan-Chen
             and Cao, Yan-Pei and Qi, Xiaojuan},
  journal = {ACM SIGGRAPH},
  year    = {2026}
}
```