Commit aa7adfd · 1 Parent(s): 860f5c0
liamsch committed

Add YAML frontmatter to README

Files changed (1):
  1. README.md +36 -115

README.md CHANGED
@@ -1,124 +1,45 @@
- <div align="center">
- <h1>🐑 SHeaP 🐑</h1>
- <h2>Self-Supervised Head Geometry Predictor Learned via 2D Gaussians</h2>
-
- <a href="https://nlml.github.io/sheap" target="_blank" rel="noopener noreferrer">
- <img src="https://img.shields.io/badge/Project_Page-green" alt="Project Page">
- </a>
- <a href="https://arxiv.org/abs/2504.12292"><img src="https://img.shields.io/badge/arXiv-2504.12292-b31b1b" alt="arXiv"></a>
- <a href="https://www.youtube.com/watch?v=vhXsZJWCBMA"><img src="https://img.shields.io/badge/YouTube-Video-red" alt="YouTube"></a>

  **Liam Schoneveld, Zhe Chen, Davide Davoli, Jiapeng Tang, Saimon Terazawa, Ko Nishino, Matthias Nießner**

- <img src="teaser.jpg" alt="SHeaP Teaser" width="100%">
-
- </div>
-
- ## Overview
-
- SHeaP learns to predict head geometry (FLAME parameters) from a single image, by predicting and rendering 2D Gaussians.
-
- This repository contains code and models for the **FLAME parameter inference only**.
-
- ## Example usage
-
- **After setting up**, for a simple example, run `python demo.py`.
-
- To run on a video you can use:
-
- ```bash
- python video_demo.py example_videos/dafoe.mp4
- ```
-
- The above command will produce the result in [example_videos/dafoe_rendered.mp4](https://github.com/nlml/SHeaP/blob/main/example_videos/dafoe_rendered.mp4).
-
- Or, here is a minimal example script:
-
- ```python
- import torch, torchvision.io as io
- from sheap import load_sheap_model
- # Available model variants:
- # sheap_model = load_sheap_model(model_type="paper")
- sheap_model = load_sheap_model(model_type="expressive")
- impath = "example_images/00000200.jpg"
- # Input should be a head crop similar to those in example_images/
- # shape (N,3,224,224) / pixel values from 0 to 1.
- image_tensor = io.decode_image(impath).float() / 255
- # flame_params_dict contains predicted FLAME parameters
- flame_params_dict = sheap_model(image_tensor[None])
- ```
-
- **Note: `model_type`** can be one of 2 values:
-
- - **`"paper"`**: used for paper results; gets best performance on NoW.
- - **`"expressive"`**: perhaps better for real-world use; it was trained for longer with less regularisation and tends to be more expressive.
-
- ## Setup
-
- ### Step 1: Install dependencies
-
- We just require `torch>=2.0.0` and a few other dependencies.
-
- Just install the latest `torch` in a new venv, then `pip install .`
-
- Or, if you use [`uv`](https://docs.astral.sh/uv/), you can just run `uv sync`.
-
- ### Step 2: Download and convert FLAME
-
- Only needed if you want to predict FLAME vertices or render a mesh.
-
- Download [FLAME2020](https://flame.is.tue.mpg.de/).
-
- Put it in the `FLAME2020/` dir. We only need generic_model.pkl. Your `FLAME2020/` directory should look like this:
-
- ```bash
- FLAME2020/
- ├── eyelids.pt
- ├── flame_landmark_idxs_barys.pt
- └── generic_model.pkl
- ```
-
- Now convert FLAME to our format:
-
- ```bash
- python convert_flame.py
- ```
-
- ## Reproduce paper results on NoW dataset
-
- To reproduce the validation results from the paper (median=0.93mm):
-
- First, update submodules:
-
- ```bash
- git submodule update --init --recursive
- ```
-
- Then build the NoW Evaluation docker image:
-
- ```bash
- docker build -t noweval now/now_evaluation
- ```
-
- Then predict FLAME meshes for all images in NoW using SHeaP:

- ```
- cd now/
- python now.py --now-dataset-root /path/to/NoW_Evaluation/dataset
- ```

- Upon finishing, the above command will print a command like the following:

- ```
- chmod 777 -R /home/user/sheap/now/now_eval_outputs/now_preds && docker run --ipc host --gpus all -it --rm -v /data/NoW_Evaluation/dataset:/dataset -v /home/user/sheap/now/now_eval_outputs/now_preds:/preds noweval
- ```

- Run that command. This will run NoW evaluation on the FLAME meshes we just predicted.

- Finally, the results will be placed in `/home/user/sheap/now/now_eval_outputs/now_preds` (or equivalent). The mean and median are already calculated:

- ```bash
- cat /home/user/sheap/now/now_eval_outputs/now_preds/results/RECON_computed_distances.npy.meanmedian
- 0.9327719333872148 # result in the paper
- 1.1568168246248534
- ```
+ ---
+ title: SHeaP - Self-Supervised Head Geometry Predictor
+ emoji: 🐑
+ colorFrom: blue
+ colorTo: green
+ sdk: gradio
+ sdk_version: 4.44.0
+ app_file: gradio_demo.py
+ pinned: false
+ license: mit
+ ---
+
+ # SHeaP: Self-Supervised Head Geometry Predictor Learned via 2D Gaussians
+
+ Upload an image or video to predict head geometry and render a 3D FLAME mesh overlay!

  **Liam Schoneveld, Zhe Chen, Davide Davoli, Jiapeng Tang, Saimon Terazawa, Ko Nishino, Matthias Nießner**

+ - [Project Page](https://nlml.github.io/sheap)
+ - [Paper](https://arxiv.org/abs/2504.12292)
+ - [GitHub Repository](https://github.com/nlml/sheap)

+ ## About

+ SHeaP learns to predict head geometry (FLAME parameters) from a single image by predicting and rendering 2D Gaussians.

+ The output shows three views:
+ - **Left**: Original cropped face
+ - **Center**: Rendered FLAME mesh
+ - **Right**: Mesh overlaid on original

+ ## Setup Instructions

+ Before deploying to Hugging Face Spaces, you need to:

+ 1. Download the FLAME model from [FLAME 2020](https://flame.is.tue.mpg.de/)
+ 2. Convert it using `python convert_flame.py`
+ 3. Include the `FLAME2020/` directory with the required files:
+    - `generic_model.pt`
+    - `eyelids.pt`
+    - `flame_landmark_idxs_barys.pt`
+ 4. Include the `models/` directory with:
+    - `model_expressive.pt`
+    - `model_paper.pt`
+    - `model_lightweight.pt` (if available)
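
The file checklist in the new README's setup instructions can be verified programmatically before a Space starts. Below is a minimal standard-library sketch; the helper name `missing_flame_files` and the default directory argument are illustrative, not part of the repository:

```python
from pathlib import Path

# File names the converted FLAME2020/ directory is expected to contain,
# per the setup checklist in the README.
REQUIRED_FLAME_FILES = [
    "generic_model.pt",
    "eyelids.pt",
    "flame_landmark_idxs_barys.pt",
]


def missing_flame_files(root: str = "FLAME2020") -> list[str]:
    """Return the required FLAME file names not found under `root`."""
    root_path = Path(root)
    return [name for name in REQUIRED_FLAME_FILES if not (root_path / name).is_file()]
```

A check like this could run at the top of the Space's `app_file` so a misconfigured deployment fails fast with a clear message rather than erroring mid-inference.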