---
license: apache-2.0
tags:
  - RyzenAI
  - Int8 quantization
  - Face Restoration
  - PSFRGAN
  - ONNX
  - Computer Vision
metrics:
  - PSNR
  - MS_SSIM
  - FID
---

# PSFRGAN for face restoration

PSFRGAN was introduced in the paper _Progressive Semantic-Aware Style Transformation for Blind Face Restoration_ by Chaofeng Chen et al. at CVPR 2021.

The model operates at 512×512 resolution and is particularly effective at restoring faces with various degradations, including blur, noise, and low resolution.

We have developed a modified version optimized for [AMD Ryzen AI](https://onnxruntime.ai/docs/execution-providers/Vitis-AI-ExecutionProvider.html).

## Model description

PSFRGAN (Progressive Semantic-aware Face Restoration Generative Adversarial Network) is a deep learning model designed for blind face restoration, capable of recovering high-quality face images from severely degraded inputs.

## Intended uses & limitations

You can use this model for face restoration tasks. See the [model hub](https://huggingface.co/models?search=amd/ryzenai-psfrgan) for all available PSFRGAN models.

## How to use

### Installation

```bash
# inference only
pip install -r requirements-infer.txt
# inference & evaluation
pip install -r requirements-eval.txt
```

### Data Preparation (optional: for evaluation)

1. Download `CelebA-Test (LQ)` and `CelebA-Test (HQ)` from [GFP-GAN homepage](https://xinntao.github.io/projects/gfpgan)
2. Organize the dataset directory as follows:

```Plain
└── datasets
     ├── celeba_512_validation
     │    ├── 00000000.png
     │    ├── ...
     └── celeba_512_validation_lq
          ├── 00000000.png
          ├── ...
```

### Test & Evaluation

- **Run inference on images**

```bash
python onnx_inference.py --onnx psfrgan_nchw_fp32.onnx --latent latent.npy --input /Path/To/Image --out-dir outputs
python onnx_inference.py --onnx psfrgan_nhwc_int8.onnx --latent latent.npy --input /Path/To/Image --out-dir outputs
```
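The fp32 model consumes NCHW float32 input at 512×512. A minimal preprocessing sketch in NumPy; the `[-1, 1]` normalization and RGB channel order are assumptions here, so check `onnx_inference.py` for the exact pipeline:

```python
import numpy as np

def preprocess(img_uint8: np.ndarray) -> np.ndarray:
    """Convert a 512x512x3 uint8 RGB image to a 1x3x512x512 float32 tensor.

    Assumes [-1, 1] normalization, which is common for GAN generators;
    the actual script may use a different scheme.
    """
    x = img_uint8.astype(np.float32) / 127.5 - 1.0  # [0, 255] -> [-1, 1]
    x = np.transpose(x, (2, 0, 1))                   # HWC -> CHW
    return x[np.newaxis, ...]                        # add batch dim -> NCHW
```

For the NHWC int8 model, the transpose step would be skipped and quantization parameters applied instead.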

_Arguments:_

`--input`: Accepts either a single image file path or a directory path. If it's a file, the script will process that image only. If it's a directory, the script will recursively scan for .png, .jpg, and .jpeg files and process all of them.
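The file/directory handling described above can be sketched as follows (a hypothetical helper, not part of the repository):

```python
from pathlib import Path

def collect_images(path: str) -> list[Path]:
    """Return the images to process for a given --input value.

    A file path is returned as-is; a directory is scanned recursively
    for .png, .jpg, and .jpeg files (case-insensitive), as documented.
    """
    p = Path(path)
    if p.is_file():
        return [p]
    exts = {".png", ".jpg", ".jpeg"}
    return sorted(q for q in p.rglob("*") if q.suffix.lower() in exts)
```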

`--latent`: (Optional) Path to the latent code file (.npy). If not provided, random latent values will be generated with a fixed seed for reproducibility.

`--out-dir`: Output directory where the restored images will be saved.
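If you want to supply your own latent file rather than relying on the script's seeded fallback, a minimal sketch; the `(1, 512)` shape is an assumption, so inspect the ONNX model's latent input to confirm:

```python
import numpy as np

# Fixed seed so the generated latent is reproducible across runs.
rng = np.random.default_rng(0)

# Shape (1, 512) is an assumed latent size; verify against the model input.
latent = rng.standard_normal((1, 512)).astype(np.float32)
np.save("latent.npy", latent)
```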

- **Evaluate the quantized model**

```bash
# eval fp32
python onnx_eval.py \
      --onnx psfrgan_nchw_fp32.onnx \
      --latent latent.npy \
      --hq-dir datasets/celeba_512_validation \
      --lq-dir datasets/celeba_512_validation_lq \
      --out-dir outputs/fp32 -clean

# eval int8
python onnx_eval.py \
      --onnx psfrgan_nhwc_int8.onnx \
      --latent latent.npy \
      --hq-dir datasets/celeba_512_validation \
      --lq-dir datasets/celeba_512_validation_lq \
      --out-dir outputs/int8 -clean
```

### Performance

| Model          | PSNR(↑) | MS_SSIM(↑) | FID(↓) |
| -------------- | ------- | ---------- | ------ |
| PSFRGAN (fp32) | 25.27   | 0.8500     | 21.99  |
| PSFRGAN (int8) | 25.27   | 0.8487     | 24.34  |
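For reference, the PSNR column follows the standard peak signal-to-noise ratio definition. A minimal sketch for uint8 images; `onnx_eval.py` may differ in details such as color space:

```python
import numpy as np

def psnr(ref: np.ndarray, img: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR in dB between two images of the same shape and dtype."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```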

---

```bibtex
@inproceedings{ChenPSFRGAN,
    author = {Chen, Chaofeng and Li, Xiaoming and Yang, Lingbo and Lin, Xianhui and Zhang, Lei and Wong, Kwan-Yee~K.},
    title = {Progressive Semantic-Aware Style Transformation for Blind Face Restoration},
    booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2021}
}
```