# RigAnything: Template‑Free Autoregressive Rigging for Diverse 3D Assets (SIGGRAPH TOG 2025)

[![Paper](https://img.shields.io/badge/Paper-A42C25?style=for-the-badge&logo=arxiv&logoColor=white)](https://arxiv.org/abs/2502.09615)
[![Project Page](https://img.shields.io/badge/Project%20Page-000000?style=for-the-badge&logo=githubpages&logoColor=white)](https://www.liuisabella.com/RigAnything/)
[![Hugging Face Models](https://img.shields.io/badge/Models-fcd022?style=for-the-badge&logo=huggingface&logoColor=000)](https://huggingface.co/Isabellaliu/RigAnything/tree/main)

RigAnything predicts skeletons and skinning for diverse 3D assets without a fixed template. This repository provides inference scripts to rig your meshes (`.glb` or `.obj`) end‑to‑end and export a rigged GLB for use in DCC tools (e.g., Blender).

## Environment setup

Recommended: create a fresh Conda environment with Python 3.11.

```bash
conda create -n riganything -y python=3.11
conda activate riganything
```

Install PyTorch per your CUDA/CPU setup (see https://pytorch.org/get-started/locally/). Example (adjust the CUDA version as needed):

```bash
# 1) Install a PyTorch build that matches your system
#    (GPU example for CUDA 12.x; pick the right wheel on the PyTorch website)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126

# 2) Install project dependencies
pip install -r requirements.txt
```

Notes:
- The scripts import Blender’s Python API (`bpy`). The `bpy` PyPI package works in headless environments; alternatively, you can use a system Blender installation. If you run into OpenGL/GLX issues on a server, consider an off‑screen setup (e.g., OSMesa or Xvfb) and make sure libGL is available.
- `open3d` and `pymeshlab` may require system GL libraries on Linux (e.g., `libgl1`).

## Checkpoint

Download the pre‑trained checkpoint and place it under `ckpt/`:

```bash
hf download Isabellaliu/RigAnything --local-dir ckpt/
```

## Quick start

Use the provided script to simplify your mesh (optional) and run inference. The tool accepts either `.glb` or `.obj` as input.

```bash
sh scripts/inference.sh <path_to_mesh.(glb|obj)> <simplify_flag: 0|1> <target_face_count>
```

Example:

```bash
sh scripts/inference.sh data_examples/spyro_the_dragon.glb 1 8192
```

### What the arguments mean
- `mesh_path`: path to your input mesh (`.glb` or `.obj`)
- `simplify_flag`: whether to simplify the mesh before rigging (0 = no, 1 = yes)
- `target_face_count`: the target number of faces after simplification (used only when `simplify_flag` = 1)

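As a quick sanity check before launching a run, the argument constraints above can be expressed in a few lines of Python (a hypothetical helper for illustration, not part of the repository):

```python
from pathlib import Path

def validate_args(mesh_path: str, simplify_flag: int, target_face_count: int) -> None:
    """Check the three inference.sh arguments before launching a run."""
    if Path(mesh_path).suffix.lower() not in {".glb", ".obj"}:
        raise ValueError(f"mesh_path must end in .glb or .obj, got {mesh_path!r}")
    if simplify_flag not in (0, 1):
        raise ValueError("simplify_flag must be 0 (no) or 1 (yes)")
    if simplify_flag == 1 and target_face_count <= 0:
        raise ValueError("target_face_count must be positive when simplifying")

validate_args("data_examples/spyro_the_dragon.glb", 1, 8192)  # passes silently
```
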
### Outputs
Results are written under `outputs/<asset_name>/` with these key files:
- `<name>_simplified.glb`: the simplified input mesh used for inference
- `<name>_simplified.npz`: intermediate results (joints, weights, etc.)
- `<name>_simplified_rig.glb`: the final rigged mesh you can import into Blender
- `inference.log`: logs from all steps

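The intermediate `.npz` can be inspected with NumPy. A minimal sketch (the key names and shapes below are placeholders, not the repository's actual schema; list `npz.files` on a real output to see what is stored):

```python
import numpy as np

# Illustrative only: write a synthetic file with assumed keys, then read it
# back the same way you would read outputs/<name>/<name>_simplified.npz.
path = "example_simplified.npz"
np.savez(path,
         joints=np.zeros((24, 3), dtype=np.float32),       # joint positions
         skinning=np.zeros((1000, 24), dtype=np.float32))  # per-vertex weights

npz = np.load(path)
print(npz.files)  # names of the stored arrays
for key in npz.files:
    print(key, npz[key].shape, npz[key].dtype)
```
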
## Advanced: run inference directly

You can call the Python entry points used by the script. Minimal example equivalent to the shell-script flow:

```bash
# 1) Optional: simplify
python inference_utils/mesh_simplify.py \
    --data_path data_examples/spyro_the_dragon.glb \
    --mesh_simplify 1 \
    --simplify_count 8192 \
    --output_path outputs/spyro_the_dragon

# 2) Inference (uses config.yaml + checkpoint)
python inference.py \
    --config config.yaml \
    --load ckpt/riganything_ckpt.pt \
    -s inference true \
    -s inference_out_dir outputs/spyro_the_dragon \
    --mesh_path outputs/spyro_the_dragon/spyro_the_dragon_simplified.glb

# 3) Visualize / export the rigged GLB
python inference_utils/vis_skel.py \
    --data_path outputs/spyro_the_dragon/spyro_the_dragon_simplified.npz \
    --save_path outputs/spyro_the_dragon \
    --mesh_path outputs/spyro_the_dragon/spyro_the_dragon_simplified.glb
```

## Supported inputs
- `.glb` is supported directly.
- `.obj` is supported and will be converted to `.glb` internally (without textures).

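`.obj` is a plain-text format, and the data that survives the internal conversion is just the geometry: vertex positions and faces, with texture and material references dropped. For reference, a simplified sketch of reading that geometry (ignoring texture/normal indices after `/`; this is an illustration, not the repository's converter):

```python
def read_obj_geometry(text: str):
    """Parse vertex positions ('v') and faces ('f') from OBJ text.

    Texture/normal references (the parts after '/') are dropped, mirroring
    how the .obj -> .glb conversion keeps geometry only.
    """
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # 'f 1/1 2/2 3/3' -> 0-based vertex indices (0, 1, 2)
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:]))
    return vertices, faces

obj = """v 0 0 0
v 1 0 0
v 0 1 0
f 1/1 2/2 3/3
"""
verts, faces = read_obj_geometry(obj)
print(len(verts), faces)  # 3 [(0, 1, 2)]
```
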
## Tips & troubleshooting
- GPU memory: inference uses the first CUDA device (`cuda:0`). Ensure sufficient VRAM; otherwise, simplify the mesh more aggressively (i.e., use a lower `target_face_count`).
- Headless servers: if `bpy` complains about the display or GL, install the necessary GL libraries and/or use an off‑screen context. The `bpy` PyPI wheel typically works well in server environments.

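Before a long run, it can help to confirm that `cuda:0` is visible and how much VRAM it offers. A small check along these lines (prints a hint instead of failing when PyTorch or a GPU is unavailable):

```python
# Report the device inference would use (cuda:0), or a hint if none is found.
try:
    import torch
    has_cuda = torch.cuda.is_available()
except ImportError:
    torch, has_cuda = None, False

if has_cuda:
    props = torch.cuda.get_device_properties(0)  # inference uses cuda:0
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GiB VRAM")
else:
    print("No usable CUDA device; consider lowering target_face_count.")
```
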
## Citation

If you find this work useful, please cite:

```bibtex
@article{liu2025riganything,
  title     = {RigAnything: Template-free autoregressive rigging for diverse 3D assets},
  author    = {Liu, Isabella and Xu, Zhan and Wang, Yifan and Tan, Hao and Xu, Zexiang and Wang, Xiaolong and Su, Hao and Shi, Zifan},
  journal   = {ACM Transactions on Graphics (TOG)},
  volume    = {44},
  number    = {4},
  pages     = {1--12},
  year      = {2025},
  publisher = {ACM}
}
```

---

Questions or issues? Please open a GitHub issue or reach out via the project page.