---
license: mit
size_categories:
- n<1K
pretty_name: AniGen Sample Data
tags:
- 3d
- image
task_categories:
- image-to-3d
configs:
- config_name: default
default: true
data_files:
- split: train
path: samples.csv
---
# AniGen Sample Data
[Paper](https://huggingface.co/papers/2604.08746) | [Project Page](https://yihua7.github.io/AniGen_web/) | [GitHub](https://github.com/VAST-AI-Research/AniGen)
This directory is a compact example subset of the AniGen training dataset, as presented in the paper [AniGen: Unified $S^3$ Fields for Animatable 3D Asset Generation](https://huggingface.co/papers/2604.08746).
AniGen is a unified framework that directly generates animation-ready 3D assets conditioned on a single image by representing shape, skeleton, and skinning as mutually consistent $S^3$ Fields.
## What Is Included
- 10 examples
- 10 unique raw assets
- Full cross-modal files for each example
- A subset `metadata.csv` with 10 rows
The retained directory layout follows the core structure of the reference test set:
```text
raw/
renders/
renders_cond/
skeleton/
voxels/
features/
metadata.csv
statistics.txt
latents/ (structured latents, encoded by the trained SLAT auto-encoder)
ss_latents/ (sparse-structure latents, encoded by the trained SS auto-encoder)
```
## How To Read One Example
Each row in `metadata.csv` corresponds to one example, identified by the value in the `sha256` column. In practice this value is the sample key shared across all modalities.
For a row with sample key `<file_identifier>`:
- raw asset: `local_path` field, for example `raw/<raw_file>`
- rendered views: `renders/<file_identifier>/`
- conditional rendered views: `renders_cond/<file_identifier>/`
- skeleton files: `skeleton/<file_identifier>/`
- voxel files: `voxels/<file_identifier>.ply` and `voxels/<file_identifier>_skeleton.ply`
- image feature: `features/dinov2_vitl14_reg/<file_identifier>.npz`
- mesh latents: files under `latents/*/<file_identifier>.npz`
- structure latents: files under `ss_latents/*/<file_identifier>.npz`
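The path conventions above can be sketched as a small helper. This is an illustrative snippet, not part of the official tooling: the function names (`modality_paths`, `iter_examples`) and the `root` argument are hypothetical, while the column names (`sha256`, `local_path`) and the relative layout follow this card.

```python
import csv
import os

def modality_paths(root, file_identifier):
    """Build the cross-modal file paths for one sample key (the
    `sha256` value from metadata.csv), following the directory
    layout documented in this card. Helper name is illustrative."""
    return {
        "renders": os.path.join(root, "renders", file_identifier),
        "renders_cond": os.path.join(root, "renders_cond", file_identifier),
        "skeleton": os.path.join(root, "skeleton", file_identifier),
        "voxels": os.path.join(root, "voxels", f"{file_identifier}.ply"),
        "voxels_skeleton": os.path.join(
            root, "voxels", f"{file_identifier}_skeleton.ply"),
        "features": os.path.join(
            root, "features", "dinov2_vitl14_reg", f"{file_identifier}.npz"),
    }

def iter_examples(root):
    """Yield (sample key, raw asset path, modality paths) per row.
    The raw asset path comes directly from the `local_path` column."""
    with open(os.path.join(root, "metadata.csv"), newline="") as f:
        for row in csv.DictReader(f):
            key = row["sha256"]
            yield key, row["local_path"], modality_paths(root, key)
```

The latent and feature files are `.npz` archives, so once a path is resolved it can be opened with `numpy.load`.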
## Sample Usage (Training)
According to the [official repository](https://github.com/VAST-AI-Research/AniGen), you can use this data for training by following these stages:
```bash
# Stage 1: Skin AutoEncoder
python train.py --config configs/anigen_skin_ae.json --output_dir outputs/anigen_skin_ae
# Stage 2: Sparse Structure DAE
python train.py --config configs/ss_dae.json --output_dir outputs/ss_dae
# Stage 3: Structured Latent DAE
python train.py --config configs/slat_dae.json --output_dir outputs/slat_dae
# Stage 4: SS Flow Matching (image-conditioned generation)
python train.py --config configs/ss_flow_duet.json --output_dir outputs/ss_flow_duet
# Stage 5: SLAT Flow Matching (image-conditioned generation)
python train.py --config configs/slat_flow_auto.json --output_dir outputs/slat_flow_auto
```
## Citation
```bibtex
@article{huang2026anigen,
  title   = {AniGen: Unified $S^3$ Fields for Animatable 3D Asset Generation},
  author  = {Huang, Yi-Hua and Zhou, Zi-Xin and He, Yuting and Chang, Chirui
             and Pu, Cheng-Feng and Yang, Ziyi and Guo, Yuan-Chen
             and Cao, Yan-Pei and Qi, Xiaojuan},
  journal = {ACM SIGGRAPH},
  year    = {2026}
}
```