Dev-mohamed committed
Commit 7c80b60 (verified) · Parent(s): 3888132

Update README.md

Files changed (1): README.md (+106 −283)

README.md CHANGED
@@ -1,377 +1,200 @@
- # deepface

- <div align="center">

- [![PyPI Downloads](https://static.pepy.tech/personalized-badge/deepface?period=total&units=international_system&left_color=grey&right_color=blue&left_text=pypi%20downloads)](https://pepy.tech/project/deepface)
- [![Conda Downloads](https://img.shields.io/conda/dn/conda-forge/deepface?color=green&label=conda%20downloads)](https://anaconda.org/conda-forge/deepface)
- [![Stars](https://img.shields.io/github/stars/serengil/deepface?color=yellow&style=flat&label=%E2%AD%90%20stars)](https://github.com/serengil/deepface/stargazers)
- [![License](http://img.shields.io/:license-MIT-green.svg?style=flat)](https://github.com/serengil/deepface/blob/master/LICENSE)
- [![Tests](https://github.com/serengil/deepface/actions/workflows/tests.yml/badge.svg)](https://github.com/serengil/deepface/actions/workflows/tests.yml)

- [![Blog](https://img.shields.io/:blog-sefiks.com-blue.svg?style=flat&logo=wordpress)](https://sefiks.com)
- [![YouTube](https://img.shields.io/:youtube-@sefiks-red.svg?style=flat&logo=youtube)](https://www.youtube.com/@sefiks?sub_confirmation=1)
- [![Twitter](https://img.shields.io/:follow-@serengil-blue.svg?style=flat&logo=twitter)](https://twitter.com/intent/user?screen_name=serengil)
- [![Support me on Patreon](https://img.shields.io/endpoint.svg?url=https%3A%2F%2Fshieldsio-patreon.vercel.app%2Fapi%3Fusername%3Dserengil%26type%3Dpatrons&style=flat)](https://www.patreon.com/serengil?repo=deepface)
- [![GitHub Sponsors](https://img.shields.io/github/sponsors/serengil?logo=GitHub&color=lightgray)](https://github.com/sponsors/serengil)

- [![DOI](http://img.shields.io/:DOI-10.1109/ASYU50717.2020.9259802-blue.svg?style=flat)](https://doi.org/10.1109/ASYU50717.2020.9259802)
- [![DOI](http://img.shields.io/:DOI-10.1109/ICEET53442.2021.9659697-blue.svg?style=flat)](https://doi.org/10.1109/ICEET53442.2021.9659697)

- </div>

- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/deepface-icon-labeled.png" width="200" height="240"></p>

- Deepface is a lightweight [face recognition](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/) and facial attribute analysis ([age](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [gender](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [emotion](https://sefiks.com/2018/01/01/facial-expression-recognition-with-keras/) and [race](https://sefiks.com/2019/11/11/race-and-ethnicity-prediction-in-keras/)) framework for Python. It is a hybrid face recognition framework wrapping **state-of-the-art** models: [`VGG-Face`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/), [`FaceNet`](https://sefiks.com/2018/09/03/face-recognition-with-facenet-in-keras/), [`OpenFace`](https://sefiks.com/2019/07/21/face-recognition-with-openface-in-keras/), [`DeepFace`](https://sefiks.com/2020/02/17/face-recognition-with-facebook-deepface-in-keras/), [`DeepID`](https://sefiks.com/2020/06/16/face-recognition-with-deepid-in-keras/), [`ArcFace`](https://sefiks.com/2020/12/14/deep-face-recognition-with-arcface-in-keras-and-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), `SFace` and `GhostFaceNet`.

- Experiments show that human beings reach 97.53% accuracy on facial recognition tasks, whereas those models have already reached and surpassed that level.

- ## Installation [![PyPI](https://img.shields.io/pypi/v/deepface.svg)](https://pypi.org/project/deepface/) [![Conda](https://img.shields.io/conda/vn/conda-forge/deepface.svg)](https://anaconda.org/conda-forge/deepface)

- The easiest way to install deepface is to download it from [`PyPI`](https://pypi.org/project/deepface/). This installs the library along with its prerequisites.

- ```shell
- $ pip install deepface
- ```

- Secondly, DeepFace is also available on [`Conda`](https://anaconda.org/conda-forge/deepface); you can alternatively install the package via conda.

- ```shell
- $ conda install -c conda-forge deepface
- ```

- Thirdly, you can install deepface from its source code.

- ```shell
- $ git clone https://github.com/serengil/deepface.git
- $ cd deepface
- $ pip install -e .
- ```

- Then you will be able to import the library and use its functionality.

- ```python
- from deepface import DeepFace
- ```

- **Facial Recognition** - [`Demo`](https://youtu.be/WnUVYQP4h44)

- A modern [**face recognition pipeline**](https://sefiks.com/2020/05/01/a-gentle-introduction-to-face-recognition-in-deep-learning/) consists of 5 common stages: [detect](https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/), [align](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/), [normalize](https://sefiks.com/2020/11/20/facial-landmarks-for-face-recognition-with-dlib/), [represent](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/) and [verify](https://sefiks.com/2020/05/22/fine-tuning-the-threshold-in-face-recognition/). Deepface handles all these common stages in the background, so you don't need in-depth knowledge of the processes behind them. You can call its verification, find or analysis functions with a single line of code.

- **Face Verification** - [`Demo`](https://youtu.be/KRCvkNCOphE)

- This function verifies face pairs as the same person or different persons. It expects exact image paths as inputs; passing numpy arrays or base64-encoded images is also supported. It returns a dictionary, and you should check just its `verified` key.

- ```python
- result = DeepFace.verify(img1_path = "img1.jpg", img2_path = "img2.jpg")
- ```

- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/stock-1.jpg" width="95%" height="95%"></p>
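The dictionary returned by `verify` can be consumed like any plain Python dict. A minimal sketch of the decision step, where only the `verified` key follows the description above; the other fields shown are illustrative assumptions, not the library's contract:

```python
# Illustrative result shaped like a DeepFace.verify output; every field
# except "verified" is an assumption for demonstration purposes.
result = {
    "verified": True,
    "distance": 0.25,
    "threshold": 0.40,
    "model": "VGG-Face",
}

# Decide based on the "verified" key only, as the text above advises.
if result["verified"]:
    decision = "same person"
else:
    decision = "different persons"

print(decision)
```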
- **Face recognition** - [`Demo`](https://youtu.be/Hrjp-EStM_s)

- [Face recognition](https://sefiks.com/2020/05/25/large-scale-face-recognition-for-deep-learning/) requires applying face verification many times. Herein, deepface has an out-of-the-box find function to handle this action. It looks for the identity of the input image in the database path and returns a list of pandas DataFrames, one per face appearing in the source image. Meanwhile, facial embeddings of the facial database are stored in a pickle file to be searched faster next time. Target images in the database can contain many faces as well.

- ```python
- dfs = DeepFace.find(img_path = "img1.jpg", db_path = "C:/workspace/my_db")
- ```

- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/stock-6-v2.jpg" width="95%" height="95%"></p>
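Since `find` returns one DataFrame per detected face, a typical next step is picking the closest match from each. A minimal sketch with a hand-built stand-in for the output; the `identity` and `distance` column names are assumptions here:

```python
import pandas as pd

# Illustrative stand-in for a DeepFace.find result: a list with one
# DataFrame per face detected in the source image. Column names are
# assumptions for demonstration, not the library's contract.
dfs = [
    pd.DataFrame({
        "identity": ["my_db/Alice/Alice1.jpg", "my_db/Alice/Alice2.jpg"],
        "distance": [0.18, 0.21],
    })
]

for df in dfs:
    if not df.empty:
        # Rows are candidate matches; take the one with the smallest distance.
        best = df.sort_values("distance").iloc[0]
        print(best["identity"])
```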
80
 
81
- **Embeddings**
82
 
83
- Face recognition models basically represent facial images as multi-dimensional vectors. Sometimes, you need those embedding vectors directly. DeepFace comes with a dedicated representation function. Represent function returns a list of embeddings. Result is going to be the size of faces appearing in the image path.
84
 
85
- ```python
86
- embedding_objs = DeepFace.represent(img_path = "img.jpg")
87
- ```
88
 
89
- This function returns an array as embedding. The size of the embedding array would be different based on the model name. For instance, VGG-Face is the default model and it represents facial images as 4096 dimensional vectors.
90
 
91
- ```python
92
- embedding = embedding_objs[0]["embedding"]
93
- assert isinstance(embedding, list)
94
- assert model_name == "VGG-Face" and len(embedding) == 4096
95
- ```
96
 
97
- Here, embedding is also [plotted](https://sefiks.com/2020/05/01/a-gentle-introduction-to-face-recognition-in-deep-learning/) with 4096 slots horizontally. Each slot is corresponding to a dimension value in the embedding vector and dimension value is explained in the colorbar on the right. Similar to 2D barcodes, vertical dimension stores no information in the illustration.
98
 
99
- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/embedding.jpg" width="95%" height="95%"></p>
100
 
101
- **Face recognition models** - [`Demo`](https://youtu.be/i_MOwvhbLdI)
102
 
103
- Deepface is a **hybrid** face recognition package. It currently wraps many **state-of-the-art** face recognition models: [`VGG-Face`](https://sefiks.com/2018/08/06/deep-face-recognition-with-keras/) , [`FaceNet`](https://sefiks.com/2018/09/03/face-recognition-with-facenet-in-keras/), [`OpenFace`](https://sefiks.com/2019/07/21/face-recognition-with-openface-in-keras/), [`DeepFace`](https://sefiks.com/2020/02/17/face-recognition-with-facebook-deepface-in-keras/), [`DeepID`](https://sefiks.com/2020/06/16/face-recognition-with-deepid-in-keras/), [`ArcFace`](https://sefiks.com/2020/12/14/deep-face-recognition-with-arcface-in-keras-and-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), `SFace` and `GhostFaceNet`. The default configuration uses VGG-Face model.
104
 
105
- ```python
106
- models = [
107
- "VGG-Face",
108
- "Facenet",
109
- "Facenet512",
110
- "OpenFace",
111
- "DeepFace",
112
- "DeepID",
113
- "ArcFace",
114
- "Dlib",
115
- "SFace",
116
- "GhostFaceNet",
117
- ]
118
 
119
- #face verification
120
- result = DeepFace.verify(img1_path = "img1.jpg",
121
- img2_path = "img2.jpg",
122
- model_name = models[0]
123
- )
124
 
125
- #face recognition
126
- dfs = DeepFace.find(img_path = "img1.jpg",
127
- db_path = "C:/workspace/my_db",
128
- model_name = models[1]
129
- )
130
 
131
- #embeddings
132
- embedding_objs = DeepFace.represent(img_path = "img.jpg",
133
- model_name = models[2]
134
- )
135
- ```
136
 
137
- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/model-portfolio-20240316.jpg" width="95%" height="95%"></p>
138
 
139
- FaceNet, VGG-Face, ArcFace and Dlib are [overperforming](https://youtu.be/i_MOwvhbLdI) ones based on experiments. You can find out the scores of those models below on [Labeled Faces in the Wild](https://sefiks.com/2020/08/27/labeled-faces-in-the-wild-for-face-recognition/) set declared by its creators.
140
 
141
- | Model | Declared LFW Score |
142
- | -------------- | ------------------ |
143
- | VGG-Face | 98.9% |
144
- | Facenet | 99.2% |
145
- | Facenet512 | 99.6% |
146
- | OpenFace | 92.9% |
147
- | DeepID | 97.4% |
148
- | Dlib | 99.3 % |
149
- | SFace | 99.5% |
150
- | ArcFace | 99.5% |
151
- | GhostFaceNet | 99.7% |
152
- | *Human-beings* | *97.5%* |
153
 
154
- Conducting experiments with those models within DeepFace may reveal disparities compared to the original studies, owing to the adoption of distinct detection or normalization techniques. Furthermore, some models have been released solely with their backbones, lacking pre-trained weights. Thus, we are utilizing their re-implementations instead of the original pre-trained weights.
155
 
156
- **Similarity**
157
 
158
- Face recognition models are regular [convolutional neural networks](https://sefiks.com/2018/03/23/convolutional-autoencoder-clustering-images-with-neural-networks/) and they are responsible to represent faces as vectors. We expect that a face pair of same person should be [more similar](https://sefiks.com/2020/05/22/fine-tuning-the-threshold-in-face-recognition/) than a face pair of different persons.
159
 
160
- Similarity could be calculated by different metrics such as [Cosine Similarity](https://sefiks.com/2018/08/13/cosine-similarity-in-machine-learning/), Euclidean Distance and L2 form. The default configuration uses cosine similarity.
161
 
162
- ```python
163
- metrics = ["cosine", "euclidean", "euclidean_l2"]
164
 
165
- #face verification
166
- result = DeepFace.verify(img1_path = "img1.jpg",
167
- img2_path = "img2.jpg",
168
- distance_metric = metrics[1]
169
- )
170
 
171
- #face recognition
172
- dfs = DeepFace.find(img_path = "img1.jpg",
173
- db_path = "C:/workspace/my_db",
174
- distance_metric = metrics[2]
175
- )
176
- ```
177
 
178
- Euclidean L2 form [seems](https://youtu.be/i_MOwvhbLdI) to be more stable than cosine and regular Euclidean distance based on experiments.
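The three metric names above map to standard vector formulas, which can be sketched with numpy on toy vectors (real embeddings are much longer, e.g. 4096-dimensional for VGG-Face):

```python
import numpy as np

# Toy 4-dimensional "embeddings" standing in for real face vectors.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, 2.0, 3.0, 5.0])

# cosine: 1 minus the cosine similarity of the two vectors.
cosine = 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# euclidean: plain Euclidean distance between the raw vectors.
euclidean = np.linalg.norm(a - b)

# euclidean_l2: Euclidean distance between the L2-normalized vectors.
euclidean_l2 = np.linalg.norm(a / np.linalg.norm(a) - b / np.linalg.norm(b))

print(cosine, euclidean, euclidean_l2)
```

For unit-normalized vectors the last two metrics are related: the squared L2 distance equals twice the cosine distance, which is one reason they often rank pairs similarly.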
- **Facial Attribute Analysis** - [`Demo`](https://youtu.be/GT2UeN85BdA)

- Deepface also comes with a strong facial attribute analysis module for [`age`](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [`gender`](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/), [`facial expression`](https://sefiks.com/2018/01/01/facial-expression-recognition-with-keras/) (including angry, fear, neutral, sad, disgust, happy and surprise) and [`race`](https://sefiks.com/2019/11/11/race-and-ethnicity-prediction-in-keras/) (including asian, white, middle eastern, indian, latino and black) predictions. The result is a list with one entry per face appearing in the source image.

- ```python
- objs = DeepFace.analyze(img_path = "img4.jpg",
-   actions = ['age', 'gender', 'race', 'emotion']
- )
- ```

- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/stock-2.jpg" width="95%" height="95%"></p>

- The age model has a mean absolute error of ±4.65 years; the gender model reaches 97.44% accuracy, 96.29% precision and 95.05% recall, as mentioned in its [tutorial](https://sefiks.com/2019/02/13/apparent-age-and-gender-prediction-in-keras/).
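Each entry of the analysis result covers one face. A minimal sketch of summarizing it, using a hand-built stand-in; the `dominant_*` field names are assumptions here, not a documented contract:

```python
# Illustrative stand-in for one DeepFace.analyze result list; the field
# names below are assumptions for demonstration purposes.
objs = [
    {
        "age": 31,
        "dominant_gender": "Woman",
        "dominant_emotion": "happy",
        "dominant_race": "latino hispanic",
    }
]

# One entry per face detected in the image.
for obj in objs:
    summary = f'{obj["age"]}-year-old {obj["dominant_gender"]}, {obj["dominant_emotion"]}'
    print(summary)
```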
- **Face Detectors** - [`Demo`](https://youtu.be/GZ2p2hj2H5k)

- Face detection and alignment are important early stages of a modern face recognition pipeline. Experiments show that alignment alone increases face recognition accuracy by almost 1%. [`OpenCV`](https://sefiks.com/2020/02/23/face-alignment-for-face-recognition-in-python-within-opencv/), [`Ssd`](https://sefiks.com/2020/08/25/deep-face-detection-with-opencv-in-python/), [`Dlib`](https://sefiks.com/2020/07/11/face-recognition-with-dlib-in-python/), [`MtCnn`](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/), `Faster MTCNN`, [`RetinaFace`](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/), [`MediaPipe`](https://sefiks.com/2022/01/14/deep-face-detection-with-mediapipe/), `Yolo`, `YuNet` and `CenterFace` detectors are wrapped in deepface.

- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/detector-portfolio-v6.jpg" width="95%" height="95%"></p>

- All deepface functions accept an optional detector backend input argument. You can switch among those detectors with this argument. OpenCV is the default detector.

- ```python
- backends = [
-   'opencv',
-   'ssd',
-   'dlib',
-   'mtcnn',
-   'fastmtcnn',
-   'retinaface',
-   'mediapipe',
-   'yolov8',
-   'yunet',
-   'centerface',
- ]

- # face verification
- obj = DeepFace.verify(img1_path = "img1.jpg",
-   img2_path = "img2.jpg",
-   detector_backend = backends[0]
- )

- # face recognition
- dfs = DeepFace.find(img_path = "img.jpg",
-   db_path = "my_db",
-   detector_backend = backends[1]
- )

- # embeddings
- embedding_objs = DeepFace.represent(img_path = "img.jpg",
-   detector_backend = backends[2]
- )

- # facial analysis
- demographies = DeepFace.analyze(img_path = "img4.jpg",
-   detector_backend = backends[3]
- )

- # face detection and alignment
- face_objs = DeepFace.extract_faces(img_path = "img.jpg",
-   detector_backend = backends[4]
- )
- ```

- Face recognition models are actually CNN models and they expect standard-sized inputs, so resizing is required before representation. To avoid deformation, deepface adds black padding pixels according to the target size argument after detection and alignment.

- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/detector-outputs-20240414.jpg" width="90%" height="90%"></p>

- [RetinaFace](https://sefiks.com/2021/04/27/deep-face-detection-with-retinaface-in-python/) and [MTCNN](https://sefiks.com/2020/09/09/deep-face-detection-with-mtcnn-in-python/) seem to outperform in the detection and alignment stages, but they are much slower. If the speed of your pipeline is more important, you should use opencv or ssd. If accuracy is more important, use retinaface or mtcnn.

- The performance of RetinaFace is very satisfactory even in crowded scenes, as seen in the following illustration. Besides, it comes with incredible facial landmark detection performance. The highlighted red points show some facial landmarks such as eyes, nose and mouth. That is why the alignment score of RetinaFace is high as well.

- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/retinaface-results.jpeg" width="90%" height="90%">
- <br><em>The Yellow Angels - Fenerbahce Women's Volleyball Team</em>
- </p>

- You can find out more about RetinaFace in this [repo](https://github.com/serengil/retinaface).

- **Real Time Analysis** - [`Demo`](https://youtu.be/-c9sSJcx6wI)

- You can run deepface on real-time video as well. The stream function accesses your webcam and applies both face recognition and facial attribute analysis. It starts to analyze a frame once it detects a face in 5 consecutive frames, then shows the results for 5 seconds.

- ```python
- DeepFace.stream(db_path = "C:/User/Sefik/Desktop/database")
- ```

- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/stock-3.jpg" width="90%" height="90%"></p>

- Even though face recognition is based on one-shot learning, you can use multiple face pictures of a person as well. You should rearrange your directory structure as illustrated below.

- ```bash
- user
- ├── database
- │   ├── Alice
- │   │   ├── Alice1.jpg
- │   │   ├── Alice2.jpg
- │   ├── Bob
- │   │   ├── Bob.jpg
- ```

- **API** - [`Demo`](https://youtu.be/HeKCQ6U9XmI)

- DeepFace serves an API as well - see the [`api folder`](https://github.com/serengil/deepface/tree/master/deepface/api/src) for more details. You can clone the deepface source code and run the API with the following command. It uses a gunicorn server to bring a REST service up. In this way, you can call deepface from an external system such as a mobile app or website.

- ```shell
- cd scripts
- ./service.sh
- ```

- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/deepface-api.jpg" width="90%" height="90%"></p>

- Face recognition, facial attribute analysis and vector representation functions are covered by the API. These functions should be called as HTTP POST requests. The default service endpoints are `http://localhost:5000/verify` for face recognition, `http://localhost:5000/analyze` for facial attribute analysis, and `http://localhost:5000/represent` for vector representation. You can pass input images as exact image paths in your environment, base64-encoded strings, or image URLs on the web. [Here](https://github.com/serengil/deepface/tree/master/deepface/api/postman), you can find a Postman project showing how these methods should be called.
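One of the input modes mentioned above is base64-encoded images. A minimal sketch of building such a POST body; the `img1`/`img2` field names and the data-URI prefix are assumptions here, not the API's documented contract:

```python
import base64

# Encode raw image bytes as a base64 data URI for a JSON request body.
# The prefix and field names below are assumptions for demonstration.
def to_base64(image_bytes: bytes) -> str:
    return "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode()

payload = {
    "img1": to_base64(b"\xff\xd8fake-jpeg-bytes-1"),
    "img2": to_base64(b"\xff\xd8fake-jpeg-bytes-2"),
}

# With the service from the section above running locally:
# import requests
# resp = requests.post("http://localhost:5000/verify", json=payload)
# print(resp.json())
print(sorted(payload))
```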
- **Dockerized Service**

- You can deploy the deepface API on a Kubernetes cluster with Docker. The following [shell script](https://github.com/serengil/deepface/blob/master/scripts/dockerize.sh) will serve deepface on `localhost:5000`. You need to re-configure the [Dockerfile](https://github.com/serengil/deepface/blob/master/Dockerfile) if you want to change the port. Then, even if you do not have a development environment, you will be able to consume deepface services such as verify and analyze. You can also access the inside of the docker image to run deepface-related commands. Please follow the instructions in the [shell script](https://github.com/serengil/deepface/blob/master/scripts/dockerize.sh).

- ```shell
- cd scripts
- ./dockerize.sh
- ```

- <p align="center"><img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/deepface-dockerized-v2.jpg" width="50%" height="50%"></p>

- **Command Line Interface** - [`Demo`](https://youtu.be/PKKTAr3ts2s)

- DeepFace comes with a command line interface as well. You can access its functions from the command line as shown below. The deepface command expects the function name as its first argument and the function arguments thereafter.

- ```shell
- # face verification
- $ deepface verify -img1_path tests/dataset/img1.jpg -img2_path tests/dataset/img2.jpg

- # facial analysis
- $ deepface analyze -img_path tests/dataset/img1.jpg
- ```

- You can also run these commands if you are running deepface with docker. Please follow the instructions in the [shell script](https://github.com/serengil/deepface/blob/master/scripts/dockerize.sh#L17).

- ## Contribution

- Pull requests are more than welcome! If you are planning to contribute a large patch, please create an issue first to get any upfront questions or design decisions out of the way.

- Before creating a PR, you should run the unit tests and linting locally with `make test && make lint`. Once a PR is sent, the GitHub test workflow runs automatically, and the unit test and linting jobs will be available in [GitHub actions](https://github.com/serengil/deepface/actions) before approval.

- ## Support

- There are many ways to support a project - starring⭐️ the GitHub repo is just one 🙏

- You can also support this work on [Patreon](https://www.patreon.com/serengil?repo=deepface) or [GitHub Sponsors](https://github.com/sponsors/serengil).

- <a href="https://www.patreon.com/serengil?repo=deepface">
- <img src="https://raw.githubusercontent.com/serengil/deepface/master/icon/patreon.png" width="30%" height="30%">
- </a>

- ## Citation

- Please cite deepface in your publications if it helps your research - see [`CITATIONS`](https://github.com/serengil/deepface/blob/master/CITATION.md) for more details. Here are its BibTeX entries:

- If you use deepface in your research for facial recognition purposes, please cite this publication.

- ```BibTeX
- @inproceedings{serengil2020lightface,
-   title        = {LightFace: A Hybrid Deep Face Recognition Framework},
-   author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
-   booktitle    = {2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
-   pages        = {23-27},
-   year         = {2020},
-   doi          = {10.1109/ASYU50717.2020.9259802},
-   url          = {https://ieeexplore.ieee.org/document/9259802},
-   organization = {IEEE}
- }
- ```

- If you use deepface in your research for facial attribute analysis purposes such as age, gender, emotion or ethnicity prediction, please cite this publication.

- ```BibTeX
- @inproceedings{serengil2021lightface,
-   title        = {HyperExtended LightFace: A Facial Attribute Analysis Framework},
-   author       = {Serengil, Sefik Ilkin and Ozpinar, Alper},
-   booktitle    = {2021 International Conference on Engineering and Emerging Technologies (ICEET)},
-   pages        = {1-4},
-   year         = {2021},
-   doi          = {10.1109/ICEET53442.2021.9659697},
-   url          = {https://ieeexplore.ieee.org/document/9659697},
-   organization = {IEEE}
- }
- ```

- Also, if you use deepface in your GitHub projects, please add `deepface` to your `requirements.txt`.

- ## Licence

- DeepFace is licensed under the MIT License - see [`LICENSE`](https://github.com/serengil/deepface/blob/master/LICENSE) for more details.

- DeepFace wraps some external face recognition models: [VGG-Face](http://www.robots.ox.ac.uk/~vgg/software/vgg_face/), [Facenet](https://github.com/davidsandberg/facenet/blob/master/LICENSE.md), [OpenFace](https://github.com/iwantooxxoox/Keras-OpenFace/blob/master/LICENSE), [DeepFace](https://github.com/swghosh/DeepFace), [DeepID](https://github.com/Ruoyiran/DeepID/blob/master/LICENSE.md), [ArcFace](https://github.com/leondgarse/Keras_insightface/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/dlib/LICENSE.txt), [SFace](https://github.com/opencv/opencv_zoo/blob/master/models/face_recognition_sface/LICENSE) and [GhostFaceNet](https://github.com/HamadYA/GhostFaceNets/blob/main/LICENSE). Besides, the age, gender and race / ethnicity models were trained on the backbone of VGG-Face with transfer learning. Similarly, DeepFace wraps many face detectors: [OpenCv](https://github.com/opencv/opencv/blob/4.x/LICENSE), [Ssd](https://github.com/opencv/opencv/blob/master/LICENSE), [Dlib](https://github.com/davisking/dlib/blob/master/LICENSE.txt), [MtCnn](https://github.com/ipazc/mtcnn/blob/master/LICENSE), [Fast MtCnn](https://github.com/timesler/facenet-pytorch/blob/master/LICENSE.md), [RetinaFace](https://github.com/serengil/retinaface/blob/master/LICENSE), [MediaPipe](https://github.com/google/mediapipe/blob/master/LICENSE), [YuNet](https://github.com/ShiqiYu/libfacedetection/blob/master/LICENSE), [Yolo](https://github.com/derronqi/yolov8-face/blob/main/LICENSE) and [CenterFace](https://github.com/Star-Clouds/CenterFace/blob/master/LICENSE). Those license types are inherited when you use these models, so check them before any production use.

- The DeepFace [logo](https://thenounproject.com/term/face-recognition/2965879/) was created by [Adrien Coquet](https://thenounproject.com/coquet_adrien/) and is licensed under the [Creative Commons: By Attribution 3.0 License](https://creativecommons.org/licenses/by/3.0/).
 
+ ---
+ # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
+ # Doc / guide: https://huggingface.co/docs/hub/model-cards
+ {}
+ ---

+ # Model Card for Model ID

+ <!-- Provide a quick summary of what the model is/does. -->

+ This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).

+ ## Model Details

+ ### Model Description

+ <!-- Provide a longer summary of what this model is. -->

+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]

+ ### Model Sources [optional]

+ <!-- Provide the basic links for the model. -->

+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]

+ ## Uses

+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

+ ### Direct Use

+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+ [More Information Needed]

+ ### Downstream Use [optional]

+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

+ [More Information Needed]

+ ### Out-of-Scope Use

+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

+ [More Information Needed]

+ ## Bias, Risks, and Limitations

+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->

+ [More Information Needed]

+ ### Recommendations

+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

+ ## How to Get Started with the Model

+ Use the code below to get started with the model.

+ [More Information Needed]

+ ## Training Details

+ ### Training Data

+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

+ [More Information Needed]

+ ### Training Procedure

+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

+ #### Preprocessing [optional]

+ [More Information Needed]

+ #### Training Hyperparameters

+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

+ #### Speeds, Sizes, Times [optional]

+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

+ [More Information Needed]

+ ## Evaluation

+ <!-- This section describes the evaluation protocols and provides the results. -->

+ ### Testing Data, Factors & Metrics

+ #### Testing Data

+ <!-- This should link to a Dataset Card if possible. -->

+ [More Information Needed]

+ #### Factors

+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

+ [More Information Needed]

+ #### Metrics

+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->

+ [More Information Needed]

+ ### Results

+ [More Information Needed]

+ #### Summary

+ ## Model Examination [optional]

+ <!-- Relevant interpretability work for the model goes here -->

+ [More Information Needed]

+ ## Environmental Impact

+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]

+ ## Technical Specifications [optional]

+ ### Model Architecture and Objective

+ [More Information Needed]

+ ### Compute Infrastructure

+ [More Information Needed]

+ #### Hardware

+ [More Information Needed]

+ #### Software

+ [More Information Needed]

+ ## Citation [optional]

+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

+ **BibTeX:**

+ [More Information Needed]

+ **APA:**

+ [More Information Needed]

+ ## Glossary [optional]

+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

+ [More Information Needed]

+ ## More Information [optional]

+ [More Information Needed]

+ ## Model Card Authors [optional]

+ [More Information Needed]

+ ## Model Card Contact

+ [More Information Needed]