---
language:
- en
tags:
- Animation
- Auto-rigging
- Graphics
---
# RigAnything: Template‑Free Autoregressive Rigging for Diverse 3D Assets (SIGGRAPH TOG 2025)
[Paper](https://arxiv.org/abs/2502.09615)
[Project Page](https://www.liuisabella.com/RigAnything/)
[Code](https://github.com/Isabella98Liu/RigAnything)
[Model Weights](https://huggingface.co/Isabellaliu/RigAnything/tree/main)
RigAnything predicts skeletons and skinning for diverse 3D assets without a fixed template. This repository provides inference scripts to rig your meshes (.glb or .obj) end‑to‑end and export a rigged GLB for use in DCC tools (e.g., Blender).
## Environment setup
Recommended: create a fresh Conda env with Python 3.11.
```bash
conda create -n riganything -y python=3.11
conda activate riganything
```
Install PyTorch per your CUDA/CPU setup (see https://pytorch.org/get-started/locally/). Example (adjust CUDA version as needed):
```bash
# GPU example (CUDA 12.x) — pick the right wheel from PyTorch website
# 1) Install PyTorch that matches your system
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126
# 2) Install project dependencies
pip install -r requirements.txt
```
Notes:
- The scripts import Blender’s Python API (`bpy`). The `bpy` PyPI package works in headless environments; alternatively, you may use a system Blender installation. If you run into OpenGL/GLX issues on a server, consider an off‑screen setup (e.g., OSMesa/Xvfb) and ensure libGL is available.
- `open3d`/`pymeshlab` may require system GL libraries on Linux (e.g., `libgl1`).
## Checkpoint
Download the pre‑trained checkpoint and place it under `ckpt/`.
```bash
hf download Isabellaliu/RigAnything --local-dir ckpt/
```
## Quick start
Use the provided script to simplify your mesh (optional) and run inference. The tool accepts either `.glb` or `.obj` as input.
```bash
sh scripts/inference.sh <path_to_mesh.(glb|obj)> <simplify_flag: 0|1> <target_face_count>
```
Example:
```bash
sh scripts/inference.sh data_examples/spyro_the_dragon.glb 1 8192
```
### What the arguments mean
- `mesh_path`: path to your input mesh (`.glb` or `.obj`)
- `simplify_flag`: whether to simplify the mesh before rigging (0 = no, 1 = yes)
- `target_face_count`: the target number of faces after simplification (only used when `simplify_flag` is 1)
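If you have many assets, the single-asset command above extends naturally to a loop. A minimal sketch (the `MESH_DIR` variable and the dry-run `echo` are illustrative, not part of the repo):

```bash
# Hypothetical batch driver: queue every .glb in a folder for rigging.
MESH_DIR=data_examples            # assumption: your assets live here
for mesh in "$MESH_DIR"/*.glb; do
  [ -e "$mesh" ] || continue      # skip when the glob matches nothing
  echo "would rig: $mesh"         # dry run; to execute for real, use:
  # sh scripts/inference.sh "$mesh" 1 8192
done
```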
### Outputs
Results are written under `outputs/<asset_name>/` with these key files:
- `<name>_simplified.glb`: the simplified input mesh used for inference
- `<name>_simplified.npz`: intermediate results (joints, weights, etc.)
- `<name>_simplified_rig.glb`: the final rigged mesh you can import into Blender
- `inference.log`: logs from all steps
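The intermediate `.npz` is a standard NumPy archive, so you can inspect it before importing the rig anywhere. A hedged sketch (the key names and shapes below are placeholders for the demo; list the actual keys this repo writes with `data.files`):

```python
# Inspect an intermediate .npz archive with NumPy.
# "joints" / "weights" and their shapes are illustrative assumptions,
# not the repo's guaranteed schema -- check `data.files` on real output.
import numpy as np

# Build a tiny stand-in archive so the example is self-contained.
np.savez("demo_rig.npz",
         joints=np.zeros((24, 3)),      # e.g. 24 joint positions in 3D
         weights=np.zeros((1000, 24)))  # e.g. per-vertex skinning weights

with np.load("demo_rig.npz") as data:
    for key in data.files:
        print(key, data[key].shape)
```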
## Advanced: run inference directly
You can call the Python entry points that the shell script wraps. A minimal equivalent of the script's flow:
```bash
# 1) Optional: simplify
python inference_utils/mesh_simplify.py \
--data_path data_examples/spyro_the_dragon.glb \
--mesh_simplify 1 \
--simplify_count 8192 \
--output_path outputs/spyro_the_dragon
# 2) Inference (uses config.yaml + checkpoint)
python inference.py \
--config config.yaml \
--load ckpt/riganything_ckpt.pt \
-s inference true \
-s inference_out_dir outputs/spyro_the_dragon \
--mesh_path outputs/spyro_the_dragon/spyro_the_dragon_simplified.glb
# 3) Visualize / export rigged GLB
python inference_utils/vis_skel.py \
--data_path outputs/spyro_the_dragon/spyro_the_dragon_simplified.npz \
--save_path outputs/spyro_the_dragon \
--mesh_path outputs/spyro_the_dragon/spyro_the_dragon_simplified.glb
```
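The three stages above can also be driven from Python. A sketch using `subprocess` (the paths and flags simply mirror the commands above; `stage_commands` is a hypothetical helper, not an API exposed by the repo):

```python
# Drive the three inference stages from Python; mirrors the shell commands above.
import subprocess
import sys

def stage_commands(asset="spyro_the_dragon", faces=8192):
    """Build the argument lists for simplify -> inference -> export."""
    out = f"outputs/{asset}"
    simplified = f"{out}/{asset}_simplified.glb"
    return [
        [sys.executable, "inference_utils/mesh_simplify.py",
         "--data_path", f"data_examples/{asset}.glb",
         "--mesh_simplify", "1",
         "--simplify_count", str(faces),
         "--output_path", out],
        [sys.executable, "inference.py",
         "--config", "config.yaml",
         "--load", "ckpt/riganything_ckpt.pt",
         "-s", "inference", "true",
         "-s", "inference_out_dir", out,
         "--mesh_path", simplified],
        [sys.executable, "inference_utils/vis_skel.py",
         "--data_path", f"{out}/{asset}_simplified.npz",
         "--save_path", out,
         "--mesh_path", simplified],
    ]

# To execute for real (requires the repo checkout and checkpoint):
# for cmd in stage_commands():
#     subprocess.run(cmd, check=True)  # stop at the first failing stage
```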
## Supported inputs
- `.glb` is supported directly.
- `.obj` is supported and will be converted to `.glb` internally (without textures).
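For reference, `.obj` geometry is plain text (`v` lines for vertices, `f` lines for faces), which is why a texture-free conversion is straightforward. A minimal pure-Python sketch of reading that geometry (an illustration, not the repo's converter):

```python
# Minimal OBJ geometry reader: vertex positions and faces only,
# ignoring textures, normals, and materials (as the internal conversion does).
def parse_obj(text):
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":    # vertex position: "v x y z"
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":  # face; tokens look like "i", "i/j", or "i/j/k"
            faces.append(tuple(int(tok.split("/")[0]) - 1 for tok in parts[1:]))
    return vertices, faces

# Tiny single-triangle example.
obj_text = """
v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
"""
verts, faces = parse_obj(obj_text)
print(len(verts), "vertices,", len(faces), "faces")  # 3 vertices, 1 faces
```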
## Tips & troubleshooting
- GPU memory: inference uses the first CUDA device (`cuda:0`). Ensure sufficient VRAM; otherwise, simplify the mesh more aggressively (i.e., pass a lower `target_face_count`).
- Headless servers: if `bpy` complains about display/GL, install the necessary GL libraries and/or use an off‑screen context. Using the `bpy` PyPI wheel typically helps for server environments.
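A quick way to check whether the current environment can import `bpy` headlessly (a generic probe, not specific to this repo):

```python
# Probe for a usable bpy; on failure the exception message usually names
# the missing system library (e.g. libGL / libXrender on minimal servers).
try:
    import bpy
    have_bpy = True
    detail = bpy.app.version_string
except Exception as exc:  # ImportError, or a loader error from missing GL libs
    have_bpy = False
    detail = str(exc)
print("bpy available:", have_bpy, "-", detail)
```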
## Citation
If you find this work useful, please cite:
```
@article{liu2025riganything,
title = {RigAnything: Template-free autoregressive rigging for diverse 3D assets},
author = {Liu, Isabella and Xu, Zhan and Wang, Yifan and Tan, Hao and Xu, Zexiang and Wang, Xiaolong and Su, Hao and Shi, Zifan},
journal = {ACM Transactions on Graphics (TOG)},
volume = {44},
number = {4},
pages = {1--12},
year = {2025},
publisher = {ACM}
}
```
---
Questions or issues? Please open a GitHub issue or reach out via the project page. |