tomas-gajarsky committed
Commit 5278a41 · 0 Parent(s)

Duplicate from tomas-gajarsky/facetorch-app

Co-authored-by: Tomas Gajarsky <tomas-gajarsky@users.noreply.huggingface.co>
Files changed (16)
  1. .gitattributes +33 -0
  2. Dockerfile +21 -0
  3. Dockerfile.gpu +41 -0
  4. README.md +42 -0
  5. app.py +82 -0
  6. config.merged.gpu.yml +349 -0
  7. config.merged.yml +349 -0
  8. requirements.txt +2 -0
  9. test.jpg +0 -0
  10. test10.jpg +0 -0
  11. test2.jpg +0 -0
  12. test3.jpg +0 -0
  13. test4.jpg +0 -0
  14. test5.jpg +0 -0
  15. test6.jpg +0 -0
  16. test8.jpg +0 -0
.gitattributes ADDED
@@ -0,0 +1,33 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
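Each line in .gitattributes pairs a path pattern with attributes; the `filter=lfs diff=lfs merge=lfs` entries route matching files through Git LFS instead of storing them directly in the repository. A minimal stdlib sketch of how such entries can be interpreted (hypothetical helper, not part of this Space; `fnmatch` only approximates gitattributes glob semantics for simple patterns like `*.pt`, not `saved_model/**/*`):

```python
from fnmatch import fnmatch

def lfs_patterns(gitattributes_text: str) -> list:
    """Return path patterns whose attribute list includes filter=lfs."""
    patterns = []
    for line in gitattributes_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

def is_lfs_tracked(path: str, patterns: list) -> bool:
    # fnmatch approximates gitattributes globbing for simple patterns only
    return any(fnmatch(path, p) for p in patterns)

attrs = "*.pt filter=lfs diff=lfs merge=lfs -text\n*.zip filter=lfs diff=lfs merge=lfs -text"
pats = lfs_patterns(attrs)
print(is_lfs_tracked("model.pt", pats))  # True
print(is_lfs_tracked("app.py", pats))    # False
```

In practice such files are generated by `git lfs track` rather than written by hand.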
Dockerfile ADDED
@@ -0,0 +1,21 @@
+ FROM python:3.9.12-slim
+
+ RUN useradd -ms /bin/bash admin
+
+ ENV WORKDIR=/code
+ WORKDIR $WORKDIR
+ RUN chown -R admin:admin $WORKDIR
+ RUN chmod 755 $WORKDIR
+
+ COPY requirements.txt $WORKDIR/requirements.txt
+
+ RUN pip install gradio --no-cache-dir
+ RUN pip install --no-cache-dir --upgrade -r $WORKDIR/requirements.txt
+
+ COPY . .
+
+ USER admin
+
+ EXPOSE 7860
+
+ ENTRYPOINT ["python", "app.py"]
Dockerfile.gpu ADDED
@@ -0,0 +1,41 @@
+ FROM nvidia/cuda:11.7.0-runtime-ubuntu20.04
+
+ # Set working directory
+ ENV WORKDIR=/code
+ WORKDIR $WORKDIR
+
+ RUN useradd -ms /bin/bash admin
+ RUN chown -R admin:admin $WORKDIR
+ RUN chmod 755 $WORKDIR
+
+
+ # Install base utilities
+ RUN apt-get update && apt-get install -y \
+     software-properties-common && \
+     apt-get clean && rm -rf /var/lib/apt/lists/*
+
+ # Install Python 3.9 from ppa
+ RUN add-apt-repository ppa:deadsnakes/ppa
+ RUN apt-get install -y \
+     python3.9 \
+     python3-pip
+
+ # Link python3.9 to python3 and python
+ RUN ln -sf /usr/bin/python3.9 /usr/bin/python3 && \
+     ln -sf /usr/bin/python3 /usr/bin/python && \
+     ln -sf /usr/bin/pip3 /usr/bin/pip
+ RUN pip install --upgrade pip
+
+
+ COPY requirements.txt $WORKDIR/requirements.txt
+
+ RUN pip install gradio --no-cache-dir
+ RUN pip install --no-cache-dir --upgrade -r $WORKDIR/requirements.txt
+
+ COPY . .
+
+ USER admin
+
+ EXPOSE 7860
+
+ ENTRYPOINT ["python", "app.py", "--path-conf", "config.merged.gpu.yml"]
README.md ADDED
@@ -0,0 +1,42 @@
+ ---
+ title: Face Analysis (facetorch)
+ emoji: 🥸
+ colorFrom: red
+ colorTo: black
+ sdk: docker
+ app_port: 7860
+ pinned: false
+ license: apache-2.0
+ task_categories:
+ - face-detection
+ - face-representation
+ - face-verification
+ - facial-expression-recognition
+ - deepfake-detection
+ - face-alignment
+ - 3D-face-alignment
+ duplicated_from: tomas-gajarsky/facetorch-app
+ ---
+
+
+ # ![](https://raw.githubusercontent.com/tomas-gajarsky/facetorch/main/data/facetorch-logo-42.png "facetorch logo") facetorch
+ ![build](https://github.com/tomas-gajarsky/facetorch/actions/workflows/build.yml/badge.svg?branch=main)
+ ![lint](https://github.com/tomas-gajarsky/facetorch/actions/workflows/lint.yml/badge.svg?branch=main)
+ [![PyPI](https://img.shields.io/pypi/v/facetorch)](https://pypi.org/project/facetorch/)
+ [![Conda (channel only)](https://img.shields.io/conda/vn/conda-forge/facetorch)](https://anaconda.org/conda-forge/facetorch)
+ [![PyPI - License](https://img.shields.io/pypi/l/facetorch)](https://raw.githubusercontent.com/tomas-gajarsky/facetorch/main/LICENSE)
+ <a href="https://github.com/psf/black"><img alt="Code style: black" src="https://img.shields.io/badge/code%20style-black-000000.svg"></a>
+
+
+ [Documentation](https://tomas-gajarsky.github.io/facetorch/facetorch/index.html), [Docker Hub](https://hub.docker.com/repository/docker/tomasgajarsky/facetorch) [(GPU)](https://hub.docker.com/repository/docker/tomasgajarsky/facetorch-gpu)
+
+ Facetorch is a Python library that detects faces and analyzes facial features using deep neural networks. The goal is to gather open-source face analysis models from the community, optimize them for performance using TorchScript, and combine them into a face analysis tool that one can:
+
+ 1. configure using [Hydra](https://hydra.cc/docs/intro/) (OmegaConf)
+ 2. reproduce with [conda-lock](https://github.com/conda-incubator/conda-lock) and [Docker](https://docs.docker.com/get-docker/)
+ 3. accelerate on CPU and GPU with [TorchScript](https://pytorch.org/docs/stable/jit.html)
+ 4. extend by uploading a model file to Google Drive and adding a config YAML file to the repository
+
+ Please use the library responsibly, with caution, and follow the
+ [ethics guidelines for Trustworthy AI from the European Commission](https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html).
+ The models are not perfect and may be biased.
app.py ADDED
@@ -0,0 +1,82 @@
+ import os
+ import json
+ import argparse
+ import operator
+ import gradio as gr
+ import torchvision
+ from typing import Tuple, Dict
+ from facetorch import FaceAnalyzer
+ from facetorch.datastruct import ImageData
+ from omegaconf import OmegaConf
+ from torch.nn.functional import cosine_similarity
+
+ parser = argparse.ArgumentParser(description="App")
+ parser.add_argument(
+     "--path-conf",
+     type=str,
+     default="config.merged.yml",
+     help="Path to the config file",
+ )
+
+ args = parser.parse_args()
+
+ cfg = OmegaConf.load(args.path_conf)
+ analyzer = FaceAnalyzer(cfg.analyzer)
+
+
+ def gen_sim_dict_str(response: ImageData, pred_name: str = "verify", index: int = 0) -> str:
+     if len(response.faces) > 0:
+         base_emb = response.faces[index].preds[pred_name].logits
+         sim_dict = {face.indx: cosine_similarity(base_emb, face.preds[pred_name].logits, dim=0).item() for face in response.faces}
+         sim_dict_sort = dict(sorted(sim_dict.items(), key=operator.itemgetter(1), reverse=True))
+         sim_dict_sort_str = str(sim_dict_sort)
+     else:
+         sim_dict_sort_str = ""
+
+     return sim_dict_sort_str
+
+
+ def inference(path_image: str) -> Tuple:
+     response = analyzer.run(
+         path_image=path_image,
+         batch_size=cfg.batch_size,
+         fix_img_size=cfg.fix_img_size,
+         return_img_data=cfg.return_img_data,
+         include_tensors=cfg.include_tensors,
+         path_output=None,
+     )
+
+     pil_image = torchvision.transforms.functional.to_pil_image(response.img)
+
+     fer_dict_str = str({face.indx: face.preds["fer"].label for face in response.faces})
+     deepfake_dict_str = str({face.indx: face.preds["deepfake"].label for face in response.faces})
+     response_str = str(response)
+
+     sim_dict_str_embed = gen_sim_dict_str(response, pred_name="embed", index=0)
+     sim_dict_str_verify = gen_sim_dict_str(response, pred_name="verify", index=0)
+
+     os.remove(path_image)
+
+     out_tuple = (pil_image, fer_dict_str, deepfake_dict_str, sim_dict_str_embed, sim_dict_str_verify, response_str)
+     return out_tuple
+
+
+ title = "Face Analysis"
+ description = "Demo of facetorch, a face analysis Python library that implements open-source pre-trained neural networks for face detection, representation learning, verification, expression recognition, deepfake detection, and 3D alignment. Try selecting one of the example images or upload your own. This work would not be possible without the researchers and engineers who trained the models (sources and credits can be found in the facetorch repository)."
+ article = "<p style='text-align: center'><a href='https://github.com/tomas-gajarsky/facetorch' style='text-align:center' target='_blank'>facetorch GitHub repository</a></p>"
+
+ demo = gr.Interface(
+     inference,
+     [gr.Image(label="Input", type="filepath")],
+     [gr.Image(type="pil", label="Face Detection and 3D Landmarks"),
+      gr.Textbox(label="Facial Expression Recognition"),
+      gr.Textbox(label="DeepFake Detection"),
+      gr.Textbox(label="Cosine similarity of Face Representation Embeddings"),
+      gr.Textbox(label="Cosine similarity of Face Verification Embeddings"),
+      gr.Textbox(label="Response")],
+     title=title,
+     description=description,
+     article=article,
+     examples=[["./test5.jpg"], ["./test.jpg"], ["./test4.jpg"], ["./test8.jpg"], ["./test6.jpg"], ["./test3.jpg"], ["./test10.jpg"]],
+ )
+ demo.queue(concurrency_count=1, api_open=False)
+ demo.launch(server_name="0.0.0.0", server_port=7860, debug=True)
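The ranking that `gen_sim_dict_str` produces can be illustrated without torch: cosine similarity of each face's embedding against a base embedding, sorted in descending order. A dependency-free sketch with hypothetical 2-D vectors (the app uses `torch.nn.functional.cosine_similarity` on real model embeddings):

```python
import math
import operator

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_by_similarity(embeddings: dict, base_index: int = 0) -> dict:
    """Mirror of gen_sim_dict_str: similarity of every face to a base face, sorted descending."""
    base = embeddings[base_index]
    sims = {idx: cosine_similarity(base, emb) for idx, emb in embeddings.items()}
    return dict(sorted(sims.items(), key=operator.itemgetter(1), reverse=True))

# Hypothetical per-face embeddings keyed by face index
faces = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [0.0, 1.0]}
print(rank_by_similarity(faces))
```

The base face always ranks first with similarity 1.0, which is why the first entry of the app's similarity textboxes is always the reference face itself.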
config.merged.gpu.yml ADDED
@@ -0,0 +1,349 @@
+ analyzer:
+   device: cuda
+   optimize_transforms: true
+   reader:
+     _target_: facetorch.analyzer.reader.ImageReader
+     device:
+       _target_: torch.device
+       type: ${analyzer.device}
+     optimize_transform: ${analyzer.optimize_transforms}
+     transform:
+       _target_: torchvision.transforms.Compose
+       transforms:
+       - _target_: facetorch.transforms.SquarePad
+       - _target_: torchvision.transforms.Resize
+         size:
+         - 1080
+   detector:
+     _target_: facetorch.analyzer.detector.FaceDetector
+     downloader:
+       _target_: facetorch.downloader.DownloaderGDrive
+       file_id: 154x2VjmTQVqmowB0yZw4Uck7uQs2vVBs
+       path_local: /code/models/torchscript/detector/1/model.pt
+     device:
+       _target_: torch.device
+       type: ${analyzer.device}
+     reverse_colors: true
+     preprocessor:
+       _target_: facetorch.analyzer.detector.pre.DetectorPreProcessor
+       transform:
+         _target_: torchvision.transforms.Compose
+         transforms:
+         - _target_: torchvision.transforms.Normalize
+           mean:
+           - 104.0
+           - 117.0
+           - 123.0
+           std:
+           - 1.0
+           - 1.0
+           - 1.0
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       optimize_transform: ${analyzer.optimize_transforms}
+       reverse_colors: ${analyzer.detector.reverse_colors}
+     postprocessor:
+       _target_: facetorch.analyzer.detector.post.PostRetFace
+       transform: None
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       optimize_transform: ${analyzer.optimize_transforms}
+       confidence_threshold: 0.02
+       top_k: 5000
+       nms_threshold: 0.4
+       keep_top_k: 750
+       score_threshold: 0.6
+       prior_box:
+         _target_: facetorch.analyzer.detector.post.PriorBox
+         min_sizes:
+         - - 16
+           - 32
+         - - 64
+           - 128
+         - - 256
+           - 512
+         steps:
+         - 8
+         - 16
+         - 32
+         clip: false
+         variance:
+         - 0.1
+         - 0.2
+       reverse_colors: ${analyzer.detector.reverse_colors}
+       expand_box_ratio: 0.1
+   unifier:
+     _target_: facetorch.analyzer.unifier.FaceUnifier
+     transform:
+       _target_: torchvision.transforms.Compose
+       transforms:
+       - _target_: torchvision.transforms.Normalize
+         mean:
+         - -123.0
+         - -117.0
+         - -104.0
+         std:
+         - 255.0
+         - 255.0
+         - 255.0
+       - _target_: torchvision.transforms.Resize
+         size:
+         - 380
+         - 380
+     device:
+       _target_: torch.device
+       type: ${analyzer.device}
+     optimize_transform: ${analyzer.optimize_transforms}
+   predictor:
+     embed:
+       _target_: facetorch.analyzer.predictor.FacePredictor
+       downloader:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 19h3kqar1wlELAmM5hDyj9tlrUh8yjrCl
+         path_local: /code/models/torchscript/predictor/embed/1/model.pt
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       preprocessor:
+         _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
+         transform:
+           _target_: torchvision.transforms.Compose
+           transforms:
+           - _target_: torchvision.transforms.Resize
+             size:
+             - 244
+             - 244
+           - _target_: torchvision.transforms.Normalize
+             mean:
+             - 0.485
+             - 0.456
+             - 0.406
+             std:
+             - 0.228
+             - 0.224
+             - 0.225
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.fer.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         reverse_colors: false
+       postprocessor:
+         _target_: facetorch.analyzer.predictor.post.PostEmbedder
+         transform: None
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.fer.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         labels:
+         - abstract
+     verify:
+       _target_: facetorch.analyzer.predictor.FacePredictor
+       downloader:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 1WI-mP_0mGW31OHfriPUsuFS_usYh_W8p
+         path_local: /code/models/torchscript/predictor/verify/2/model.pt
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       preprocessor:
+         _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
+         transform:
+           _target_: torchvision.transforms.Compose
+           transforms:
+           - _target_: torchvision.transforms.Resize
+             size:
+             - 112
+             - 112
+           - _target_: torchvision.transforms.Normalize
+             mean:
+             - 0.5
+             - 0.5
+             - 0.5
+             std:
+             - 0.5
+             - 0.5
+             - 0.5
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.verify.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         reverse_colors: true
+       postprocessor:
+         _target_: facetorch.analyzer.predictor.post.PostEmbedder
+         transform: None
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.verify.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         labels:
+         - abstract
+     fer:
+       _target_: facetorch.analyzer.predictor.FacePredictor
+       downloader:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 1xoB5VYOd0XLjb-rQqqHWCkQvma4NytEd
+         path_local: /code/models/torchscript/predictor/fer/2/model.pt
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       preprocessor:
+         _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
+         transform:
+           _target_: torchvision.transforms.Compose
+           transforms:
+           - _target_: torchvision.transforms.Resize
+             size:
+             - 260
+             - 260
+           - _target_: torchvision.transforms.Normalize
+             mean:
+             - 0.485
+             - 0.456
+             - 0.406
+             std:
+             - 0.229
+             - 0.224
+             - 0.225
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.fer.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         reverse_colors: false
+       postprocessor:
+         _target_: facetorch.analyzer.predictor.post.PostArgMax
+         transform: None
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.fer.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         dim: 1
+         labels:
+         - Anger
+         - Contempt
+         - Disgust
+         - Fear
+         - Happiness
+         - Neutral
+         - Sadness
+         - Surprise
+     deepfake:
+       _target_: facetorch.analyzer.predictor.FacePredictor
+       downloader:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 1GjDTwQpvrkCjXOdiBy1oMkzm7nt-bXFg
+         path_local: /code/models/torchscript/predictor/deepfake/1/model.pt
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       preprocessor:
+         _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
+         transform:
+           _target_: torchvision.transforms.Compose
+           transforms:
+           - _target_: torchvision.transforms.Resize
+             size:
+             - 380
+             - 380
+           - _target_: torchvision.transforms.Normalize
+             mean:
+             - 0.485
+             - 0.456
+             - 0.406
+             std:
+             - 0.229
+             - 0.224
+             - 0.225
+         device:
+           _target_: torch.device
+           type: ${analyzer.device}
+         optimize_transform: ${analyzer.optimize_transforms}
+         reverse_colors: false
+       postprocessor:
+         _target_: facetorch.analyzer.predictor.post.PostSigmoidBinary
+         transform: None
+         device:
+           _target_: torch.device
+           type: ${analyzer.device}
+         optimize_transform: ${analyzer.optimize_transforms}
+         labels:
+         - Real
+         - Fake
+         threshold: 0.7
+     align:
+       _target_: facetorch.analyzer.predictor.FacePredictor
+       downloader:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 16gNFQdEH2nWvW3zTbdIAniKIbPAp6qBA
+         path_local: /code/models/torchscript/predictor/align/1/model.pt
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       preprocessor:
+         _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
+         transform:
+           _target_: torchvision.transforms.Compose
+           transforms:
+           - _target_: torchvision.transforms.Resize
+             size:
+             - 120
+             - 120
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.align.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         reverse_colors: false
+       postprocessor:
+         _target_: facetorch.analyzer.predictor.post.PostEmbedder
+         transform: None
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.align.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         labels:
+         - abstract
+   utilizer:
+     align:
+       _target_: facetorch.analyzer.utilizer.align.Lmk3DMeshPose
+       transform: None
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       optimize_transform: false
+       downloader_meta:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 11tdAcFuSXqCCf58g52WT1Rpa8KuQwe2o
+         path_local: /code/data/3dmm/meta.pt
+       image_size: 120
+     draw_boxes:
+       _target_: facetorch.analyzer.utilizer.draw.BoxDrawer
+       transform: None
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       optimize_transform: false
+       color: green
+       line_width: 3
+     draw_landmarks:
+       _target_: facetorch.analyzer.utilizer.draw.LandmarkDrawerTorch
+       transform: None
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       optimize_transform: false
+       width: 2
+       color: green
+   logger:
+     _target_: facetorch.logger.LoggerJsonFile
+     name: facetorch
+     level: 20
+     path_file: /code/logs/facetorch/main.log
+     json_format: '%(asctime)s %(levelname)s %(message)s'
+ main:
+   sleep: 3
+   debug: true
+ batch_size: 8
+ fix_img_size: true
+ return_img_data: true
+ include_tensors: true
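Values like `${analyzer.device}` are OmegaConf interpolations: they resolve to another node of the same config at access time, so changing `analyzer.device` once switches every dependent `torch.device` between `cpu` and `cuda` (the only substantive differences between the two config files are this device value, the verify model variant, and the logger level). A rough stdlib sketch of the lookup, for illustration only (the real resolution, including nested and string interpolations, is done by omegaconf):

```python
import re

def resolve(config: dict, value: str):
    """Resolve a single ${dotted.path} interpolation against a nested dict."""
    m = re.fullmatch(r"\$\{([\w.]+)\}", value)
    if not m:
        return value  # plain value, nothing to resolve
    node = config
    for key in m.group(1).split("."):
        node = node[key]  # walk the dotted path
    return node

cfg = {"analyzer": {"device": "cuda", "optimize_transforms": True}}
print(resolve(cfg, "${analyzer.device}"))               # cuda
print(resolve(cfg, "${analyzer.optimize_transforms}"))  # True
```

This is why `app.py` only needs `--path-conf config.merged.gpu.yml` to move the whole pipeline to the GPU.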
config.merged.yml ADDED
@@ -0,0 +1,349 @@
+ analyzer:
+   device: cpu
+   optimize_transforms: true
+   reader:
+     _target_: facetorch.analyzer.reader.ImageReader
+     device:
+       _target_: torch.device
+       type: ${analyzer.device}
+     optimize_transform: ${analyzer.optimize_transforms}
+     transform:
+       _target_: torchvision.transforms.Compose
+       transforms:
+       - _target_: facetorch.transforms.SquarePad
+       - _target_: torchvision.transforms.Resize
+         size:
+         - 1080
+   detector:
+     _target_: facetorch.analyzer.detector.FaceDetector
+     downloader:
+       _target_: facetorch.downloader.DownloaderGDrive
+       file_id: 154x2VjmTQVqmowB0yZw4Uck7uQs2vVBs
+       path_local: /code/models/torchscript/detector/1/model.pt
+     device:
+       _target_: torch.device
+       type: ${analyzer.device}
+     reverse_colors: true
+     preprocessor:
+       _target_: facetorch.analyzer.detector.pre.DetectorPreProcessor
+       transform:
+         _target_: torchvision.transforms.Compose
+         transforms:
+         - _target_: torchvision.transforms.Normalize
+           mean:
+           - 104.0
+           - 117.0
+           - 123.0
+           std:
+           - 1.0
+           - 1.0
+           - 1.0
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       optimize_transform: ${analyzer.optimize_transforms}
+       reverse_colors: ${analyzer.detector.reverse_colors}
+     postprocessor:
+       _target_: facetorch.analyzer.detector.post.PostRetFace
+       transform: None
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       optimize_transform: ${analyzer.optimize_transforms}
+       confidence_threshold: 0.02
+       top_k: 5000
+       nms_threshold: 0.4
+       keep_top_k: 750
+       score_threshold: 0.6
+       prior_box:
+         _target_: facetorch.analyzer.detector.post.PriorBox
+         min_sizes:
+         - - 16
+           - 32
+         - - 64
+           - 128
+         - - 256
+           - 512
+         steps:
+         - 8
+         - 16
+         - 32
+         clip: false
+         variance:
+         - 0.1
+         - 0.2
+       reverse_colors: ${analyzer.detector.reverse_colors}
+       expand_box_ratio: 0.1
+   unifier:
+     _target_: facetorch.analyzer.unifier.FaceUnifier
+     transform:
+       _target_: torchvision.transforms.Compose
+       transforms:
+       - _target_: torchvision.transforms.Normalize
+         mean:
+         - -123.0
+         - -117.0
+         - -104.0
+         std:
+         - 255.0
+         - 255.0
+         - 255.0
+       - _target_: torchvision.transforms.Resize
+         size:
+         - 380
+         - 380
+     device:
+       _target_: torch.device
+       type: ${analyzer.device}
+     optimize_transform: ${analyzer.optimize_transforms}
+   predictor:
+     embed:
+       _target_: facetorch.analyzer.predictor.FacePredictor
+       downloader:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 19h3kqar1wlELAmM5hDyj9tlrUh8yjrCl
+         path_local: /code/models/torchscript/predictor/embed/1/model.pt
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       preprocessor:
+         _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
+         transform:
+           _target_: torchvision.transforms.Compose
+           transforms:
+           - _target_: torchvision.transforms.Resize
+             size:
+             - 244
+             - 244
+           - _target_: torchvision.transforms.Normalize
+             mean:
+             - 0.485
+             - 0.456
+             - 0.406
+             std:
+             - 0.228
+             - 0.224
+             - 0.225
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.fer.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         reverse_colors: false
+       postprocessor:
+         _target_: facetorch.analyzer.predictor.post.PostEmbedder
+         transform: None
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.fer.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         labels:
+         - abstract
+     verify:
+       _target_: facetorch.analyzer.predictor.FacePredictor
+       downloader:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 1H-aPtFd9C5D7y1vzoWsObKAxeIBE9QPd
+         path_local: /code/models/torchscript/predictor/verify/1/model.pt
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       preprocessor:
+         _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
+         transform:
+           _target_: torchvision.transforms.Compose
+           transforms:
+           - _target_: torchvision.transforms.Resize
+             size:
+             - 112
+             - 112
+           - _target_: torchvision.transforms.Normalize
+             mean:
+             - 0.485
+             - 0.456
+             - 0.406
+             std:
+             - 0.229
+             - 0.224
+             - 0.225
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.verify.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         reverse_colors: false
+       postprocessor:
+         _target_: facetorch.analyzer.predictor.post.PostEmbedder
+         transform: None
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.verify.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         labels:
+         - abstract
+     fer:
+       _target_: facetorch.analyzer.predictor.FacePredictor
+       downloader:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 1xoB5VYOd0XLjb-rQqqHWCkQvma4NytEd
+         path_local: /code/models/torchscript/predictor/fer/2/model.pt
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       preprocessor:
+         _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
+         transform:
+           _target_: torchvision.transforms.Compose
+           transforms:
+           - _target_: torchvision.transforms.Resize
+             size:
+             - 260
+             - 260
+           - _target_: torchvision.transforms.Normalize
+             mean:
+             - 0.485
+             - 0.456
+             - 0.406
+             std:
+             - 0.229
+             - 0.224
+             - 0.225
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.fer.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         reverse_colors: false
+       postprocessor:
+         _target_: facetorch.analyzer.predictor.post.PostArgMax
+         transform: None
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.fer.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         dim: 1
+         labels:
+         - Anger
+         - Contempt
+         - Disgust
+         - Fear
+         - Happiness
+         - Neutral
+         - Sadness
+         - Surprise
+     deepfake:
+       _target_: facetorch.analyzer.predictor.FacePredictor
+       downloader:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 1GjDTwQpvrkCjXOdiBy1oMkzm7nt-bXFg
+         path_local: /code/models/torchscript/predictor/deepfake/1/model.pt
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       preprocessor:
+         _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
+         transform:
+           _target_: torchvision.transforms.Compose
+           transforms:
+           - _target_: torchvision.transforms.Resize
+             size:
+             - 380
+             - 380
+           - _target_: torchvision.transforms.Normalize
+             mean:
+             - 0.485
+             - 0.456
+             - 0.406
+             std:
+             - 0.229
+             - 0.224
+             - 0.225
+         device:
+           _target_: torch.device
+           type: ${analyzer.device}
+         optimize_transform: ${analyzer.optimize_transforms}
+         reverse_colors: false
+       postprocessor:
+         _target_: facetorch.analyzer.predictor.post.PostSigmoidBinary
+         transform: None
+         device:
+           _target_: torch.device
+           type: ${analyzer.device}
+         optimize_transform: ${analyzer.optimize_transforms}
+         labels:
+         - Real
+         - Fake
+         threshold: 0.7
+     align:
+       _target_: facetorch.analyzer.predictor.FacePredictor
+       downloader:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 16gNFQdEH2nWvW3zTbdIAniKIbPAp6qBA
+         path_local: /code/models/torchscript/predictor/align/1/model.pt
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       preprocessor:
+         _target_: facetorch.analyzer.predictor.pre.PredictorPreProcessor
+         transform:
+           _target_: torchvision.transforms.Compose
+           transforms:
+           - _target_: torchvision.transforms.Resize
+             size:
+             - 120
+             - 120
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.align.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         reverse_colors: false
+       postprocessor:
+         _target_: facetorch.analyzer.predictor.post.PostEmbedder
+         transform: None
+         device:
+           _target_: torch.device
+           type: ${analyzer.predictor.align.device.type}
+         optimize_transform: ${analyzer.optimize_transforms}
+         labels:
+         - abstract
+   utilizer:
+     align:
+       _target_: facetorch.analyzer.utilizer.align.Lmk3DMeshPose
+       transform: None
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       optimize_transform: false
+       downloader_meta:
+         _target_: facetorch.downloader.DownloaderGDrive
+         file_id: 11tdAcFuSXqCCf58g52WT1Rpa8KuQwe2o
+         path_local: /code/data/3dmm/meta.pt
+       image_size: 120
+     draw_boxes:
+       _target_: facetorch.analyzer.utilizer.draw.BoxDrawer
+       transform: None
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       optimize_transform: false
+       color: green
+       line_width: 3
+     draw_landmarks:
+       _target_: facetorch.analyzer.utilizer.draw.LandmarkDrawerTorch
+       transform: None
+       device:
+         _target_: torch.device
+         type: ${analyzer.device}
+       optimize_transform: false
+       width: 2
+       color: green
+   logger:
+     _target_: facetorch.logger.LoggerJsonFile
+     name: facetorch
+     level: 10
+     path_file: /code/logs/facetorch/main.log
+     json_format: '%(asctime)s %(levelname)s %(message)s'
+ main:
+   sleep: 3
+   debug: true
+ batch_size: 8
+ fix_img_size: true
+ return_img_data: true
+ include_tensors: true
requirements.txt ADDED
@@ -0,0 +1,2 @@
+ facetorch>=0.1.4
+ omegaconf==2.2.2
test.jpg ADDED
test10.jpg ADDED
test2.jpg ADDED
test3.jpg ADDED
test4.jpg ADDED
test5.jpg ADDED
test6.jpg ADDED
test8.jpg ADDED