---
license: mit
size_categories:
- n<1K
pretty_name: AniGen Sample Data
tags:
- 3d
- image
task_categories:
- image-to-3d
configs:
- config_name: default
  default: true
  data_files:
  - split: train
    path: samples.csv
---

# AniGen Sample Data

[Paper](https://huggingface.co/papers/2604.08746) | [Project Page](https://yihua7.github.io/AniGen_web/) | [GitHub](https://github.com/VAST-AI-Research/AniGen)

This directory is a compact example subset of the AniGen training dataset, as presented in the paper [AniGen: Unified $S^3$ Fields for Animatable 3D Asset Generation](https://huggingface.co/papers/2604.08746). AniGen is a unified framework that generates animation-ready 3D assets directly from a single conditioning image by representing shape, skeleton, and skinning as mutually consistent $S^3$ Fields.

## What Is Included

- 10 examples
- 10 unique raw assets
- Full cross-modal files for each example
- A subset `metadata.csv` with 10 rows

The retained directory layout follows the core structure of the reference test set:

```text
raw/
renders/
renders_cond/
skeleton/
voxels/
features/
metadata.csv
statistics.txt
latents/     (encoded by the trained slat auto-encoder)
ss_latents/  (encoded by the trained ss auto-encoder)
```

## How To Read One Example

Each row in `metadata.csv` corresponds to one example, identified by the `sha256` column. In practice this value is the sample key used across all modalities.
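As a quick illustration of reading `metadata.csv`, the sketch below indexes every example by its `sha256` sample key and builds a few of the per-modality paths. The `index_examples` helper and its exact path patterns are assumptions based on the layout notes in this card, not part of the official loader; verify them against the files you actually have.

```python
import csv
from pathlib import Path

def index_examples(root):
    """Map each sample key (the `sha256` column of metadata.csv)
    to candidate per-modality file paths.

    The path patterns below are a sketch inferred from this card's
    layout description; adjust them to match the real subset.
    """
    root = Path(root)
    index = {}
    with open(root / "metadata.csv", newline="") as f:
        for row in csv.DictReader(f):
            key = row["sha256"]
            index[key] = {
                # `local_path` is the raw-asset field named in the card.
                "raw": root / row["local_path"],
                "voxels": root / "voxels" / f"{key}.ply",
                "skeleton_voxels": root / "voxels" / f"{key}_skeleton.ply",
                "image_feature": root / "features" / "dinov2_vitl14_reg" / f"{key}.npz",
            }
    return index
```

Calling `index_examples(".")` from the dataset root returns a dictionary keyed by sample key, which you can then iterate to check that every modality file exists before training.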
For a row with sample key ``:

- raw asset: `local_path` field, for example `raw/`
- rendered views: `renders//`
- conditional rendered views: `renders_cond//`
- skeleton files: `skeleton//`
- voxel files: `voxels/.ply` and `voxels/_skeleton.ply`
- image feature: `features/dinov2_vitl14_reg/.npz`
- mesh latents: files under `latents/*/.npz`
- structure latents: files under `ss_latents/*/.npz`

## Sample Usage (Training)

According to the [official repository](https://github.com/VAST-AI-Research/AniGen), you can use this data for training by following these stages:

```bash
# Stage 1: Skin AutoEncoder
python train.py --config configs/anigen_skin_ae.json --output_dir outputs/anigen_skin_ae

# Stage 2: Sparse Structure DAE
python train.py --config configs/ss_dae.json --output_dir outputs/ss_dae

# Stage 3: Structured Latent DAE
python train.py --config configs/slat_dae.json --output_dir outputs/slat_dae

# Stage 4: SS Flow Matching (image-conditioned generation)
python train.py --config configs/ss_flow_duet.json --output_dir outputs/ss_flow_duet

# Stage 5: SLAT Flow Matching (image-conditioned generation)
python train.py --config configs/slat_flow_auto.json --output_dir outputs/slat_flow_auto
```

## Citation

```bibtex
@article{huang2026anigen,
  title   = {AniGen: Unified $S^3$ Fields for Animatable 3D Asset Generation},
  author  = {Huang, Yi-Hua and Zhou, Zi-Xin and He, Yuting and Chang, Chirui and Pu, Cheng-Feng and Yang, Ziyi and Guo, Yuan-Chen and Cao, Yan-Pei and Qi, Xiaojuan},
  journal = {ACM SIGGRAPH},
  year    = {2026}
}
```