liamsch committed
Commit · aa7adfd
1 Parent(s): 860f5c0

Add YAML frontmatter to README

README.md CHANGED
@@ -1,124 +1,45 @@
**Liam Schoneveld, Zhe Chen, Davide Davoli, Jiapeng Tang, Saimon Terazawa, Ko Nishino, Matthias Nießner**
-## Overview
-
-SHeaP learns to predict head geometry (FLAME parameters) from a single image, by predicting and rendering 2D Gaussians.
-
-This repository contains code and models for the **FLAME parameter inference only**.
-
-## Example usage
-
-**After setting up**, for a simple example, run `python demo.py`.
-
-To run on a video you can use:
-
-```bash
-python video_demo.py example_videos/dafoe.mp4
-```
-
-The above command will produce the result in [example_videos/dafoe_rendered.mp4](https://github.com/nlml/SHeaP/blob/main/example_videos/dafoe_rendered.mp4).
-
-Or, here is a minimal example script:
-
-```python
-import torch, torchvision.io as io
-from sheap import load_sheap_model
-
-# Available model variants:
-# sheap_model = load_sheap_model(model_type="paper")
-sheap_model = load_sheap_model(model_type="expressive")
-
-impath = "example_images/00000200.jpg"
-# Input should be a head crop similar to those in example_images/
-# shape (N,3,224,224) / pixel values from 0 to 1.
-image_tensor = io.decode_image(impath).float() / 255
-
-# flame_params_dict contains predicted FLAME parameters
-flame_params_dict = sheap_model(image_tensor[None])
-```
-
-**Note: `model_type`** can be one of 2 values:
-
-- **`"paper"`**: used for paper results; gets best performance on NoW.
-- **`"expressive"`**: perhaps better for real-world use; it was trained for longer with less regularisation and tends to be more expressive.
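As a companion to the input-shape comment in the example above, here is a minimal, hypothetical sketch of getting an arbitrary H×W×3 uint8 image into the `(N, 3, 224, 224)`, `[0, 1]`-valued form it describes. This is NumPy-only with a nearest-neighbour resize; it is not part of the `sheap` package, and in practice you would use torchvision transforms instead:

```python
import numpy as np

def to_model_input(img_uint8: np.ndarray) -> np.ndarray:
    """Resize an (H, W, 3) uint8 image to (1, 3, 224, 224) floats in [0, 1].

    Hypothetical helper using nearest-neighbour resize via index selection;
    not part of the sheap package.
    """
    h, w, _ = img_uint8.shape
    ys = np.arange(224) * h // 224  # source row for each output row
    xs = np.arange(224) * w // 224  # source column for each output column
    resized = img_uint8[ys][:, xs]                  # (224, 224, 3)
    chw = resized.transpose(2, 0, 1)                # channels first: (3, 224, 224)
    return (chw.astype(np.float32) / 255.0)[None]   # add batch dimension
```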
-
-## Setup
-
-### Step 1: Install dependencies
-
-We just require `torch>=2.0.0` and a few other dependencies.
-
-Install the latest `torch` in a new venv, then `pip install .`
-
-Or, if you use [`uv`](https://docs.astral.sh/uv/), you can just run `uv sync`.
-
-### Step 2: Download and convert FLAME
-
-Only needed if you want to predict FLAME vertices or render a mesh.
-
-Download [FLAME2020](https://flame.is.tue.mpg.de/).
-
-Put it in the `FLAME2020/` dir. We only need `generic_model.pkl`. Your `FLAME2020/` directory should look like this:
-
-```bash
-FLAME2020/
-├── eyelids.pt
-├── flame_landmark_idxs_barys.pt
-└── generic_model.pkl
-```
-
-Now convert FLAME to our format:
-
-```bash
-python convert_flame.py
-```
-
-## Reproduce paper results on NoW dataset
-
-To reproduce the validation results from the paper (median=0.93mm):
-
-First, update submodules:
-
-```bash
-git submodule update --init --recursive
-```
-
-Then build the NoW Evaluation docker image:
-
-```bash
-docker build -t noweval now/now_evaluation
-```
-
-Then predict FLAME meshes for all images in NoW using SHeaP:
-
-```bash
-cd now/
-python now.py --now-dataset-root /path/to/NoW_Evaluation/dataset
-```
+---
+title: SHeaP - Self-Supervised Head Geometry Predictor
+emoji: 🐑
+colorFrom: blue
+colorTo: green
+sdk: gradio
+sdk_version: 4.44.0
+app_file: gradio_demo.py
+pinned: false
+license: mit
+---
+
+# SHeaP: Self-Supervised Head Geometry Predictor Learned via 2D Gaussians
+
+Upload an image or video to predict head geometry and render a 3D FLAME mesh overlay!
**Liam Schoneveld, Zhe Chen, Davide Davoli, Jiapeng Tang, Saimon Terazawa, Ko Nishino, Matthias Nießner**
+- [Project Page](https://nlml.github.io/sheap)
+- [Paper](https://arxiv.org/abs/2504.12292)
+- [GitHub Repository](https://github.com/nlml/sheap)
+
+## About
+
+SHeaP learns to predict head geometry (FLAME parameters) from a single image by predicting and rendering 2D Gaussians.
+
+The output shows three views:
+- **Left**: Original cropped face
+- **Center**: Rendered FLAME mesh
+- **Right**: Mesh overlaid on original
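A strip like the one described above is just three equal-sized panels concatenated along the width axis. A minimal sketch of such a composition (hypothetical helper, not from the repo):

```python
import numpy as np

def three_view_strip(original: np.ndarray, mesh: np.ndarray, overlay: np.ndarray) -> np.ndarray:
    """Concatenate three (H, W, 3) panels into one (H, 3*W, 3) strip.

    Hypothetical helper illustrating the left/center/right layout.
    """
    assert original.shape == mesh.shape == overlay.shape, "panels must match in size"
    return np.concatenate([original, mesh, overlay], axis=1)
```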
+
+## Setup Instructions
+
+Before deploying to Hugging Face Spaces, you need to:
+
+1. Download the FLAME model from [FLAME 2020](https://flame.is.tue.mpg.de/)
+2. Convert it using `python convert_flame.py`
+3. Include the `FLAME2020/` directory with the required files:
+   - `generic_model.pt`
+   - `eyelids.pt`
+   - `flame_landmark_idxs_barys.pt`
+4. Include the `models/` directory with:
+   - `model_expressive.pt`
+   - `model_paper.pt`
+   - `model_lightweight.pt` (if available)
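The file checklist in steps 3 and 4 can be verified before deploying with a small script. A sketch, using only the file names listed above (the helper name and the choice to treat `model_lightweight.pt` as optional are assumptions):

```python
from pathlib import Path

# Required files per directory, taken from the checklist above.
# model_lightweight.pt is listed as "if available", so it is excluded here.
REQUIRED = {
    "FLAME2020": ["generic_model.pt", "eyelids.pt", "flame_landmark_idxs_barys.pt"],
    "models": ["model_expressive.pt", "model_paper.pt"],
}

def missing_files(root) -> list[str]:
    """Return required paths (relative to root) that are absent."""
    root = Path(root)
    return [
        f"{d}/{name}"
        for d, names in REQUIRED.items()
        for name in names
        if not (root / d / name).is_file()
    ]
```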
|