Update README.md
README.md CHANGED
@@ -10,15 +10,14 @@ license:
 multilinguality:
 - monolingual
 size_categories:
--
+- 1K<n<10K
 source_datasets:
 - extended|other-wider
 task_categories:
 - object-detection
 task_ids:
 - face-detection
-
-pretty_name: WIDER FACE
+pretty_name: PP4AV
 ---
 
 # Dataset Card for WIDER FACE
@@ -32,7 +31,6 @@ pretty_name: WIDER FACE
 - [Dataset Structure](#dataset-structure)
   - [Data Instances](#data-instances)
   - [Data Fields](#data-fields)
-  - [Data Splits](#data-splits)
 - [Dataset Creation](#dataset-creation)
   - [Curation Rationale](#curation-rationale)
   - [Source Data](#source-data)
@@ -50,11 +48,10 @@ pretty_name: WIDER FACE
 
 ## Dataset Description
 
-- **Homepage:**
+- **Homepage:** https://github.com/khaclinh/pp4av
 - **Repository:**
-- **Paper:** [WIDER FACE: A Face Detection Benchmark](https://arxiv.org/abs/1511.06523)
-- **Leaderboard:**
-- **Point of Contact:** shuoyang.1213@gmail.com
+- **Paper:** [PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving]
+- **Point of Contact:** linhtk.dhbk@gmail.com
 
 ### Dataset Summary
 
@@ -86,19 +83,13 @@ A data point comprises an image and its face annotations.
 {
   'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1024x755 at 0x19FA12186D8>, 'faces': {
     'bbox': [
-      [
-      [
-      [
-      [
-      [
-      [
-    ]
-    'blur': [2, 2, 2, 2, 2, 2],
-    'expression': [0, 0, 0, 0, 0, 0],
-    'illumination': [0, 0, 0, 0, 0, 0],
-    'occlusion': [1, 2, 1, 2, 1, 2],
-    'pose': [0, 0, 0, 0, 0, 0],
-    'invalid': [False, False, False, False, False, False]
+      [0.230078, 0.317081, 0.239062, 0.331367],
+      [0.5017185, 0.0306425, 0.5185935, 0.0410975],
+      [0.695078, 0.0710145, 0.7109375, 0.0863355],
+      [0.4089065, 0.31646, 0.414375, 0.32764],
+      [0.1843745, 0.403416, 0.201093, 0.414182],
+      [0.71325, 0.3393474, 0.717922, 0.3514285]
+    ]
 }
 }
 ```
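A quick illustration of reading an instance like the one above with the `datasets` library; the Hub id `khaclinh/pp4av` and the `train` split name are assumptions for this sketch, not taken from the diff:

```python
# Minimal sketch, assuming the Hub id "khaclinh/pp4av" and a "train"
# split; adjust both to the actual repository layout.
from datasets import load_dataset

dataset = load_dataset("khaclinh/pp4av", split="train")

# Index the row first, then the column: sample["image"] decodes one
# file, whereas dataset["image"] would decode every image in the split.
sample = dataset[0]
print(sample["image"].size)          # PIL.Image.Image
print(len(sample["faces"]["bbox"]))  # number of annotated faces
```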
@@ -107,19 +98,7 @@ A data point comprises an image and its face annotations.
 
 - `image`: A `PIL.Image.Image` object containing the image. Note that accessing the image column (`dataset[0]["image"]`) automatically decodes the image file. Decoding a large number of image files can take a significant amount of time, so it is important to query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
 - `faces`: a dictionary of face attributes for the faces present in the image
-  - `bbox`: the bounding box of each face (in the [coco](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#coco) format)
-  - `blur`: the blur level of each face, with possible values including `clear` (0), `normal` (1) and `heavy` (2)
-  - `expression`: the facial expression of each face, with possible values including `typical` (0) and `exaggerate` (1)
-  - `illumination`: the lighting condition of each face, with possible values including `normal` (0) and `exaggerate` (1)
-  - `occlusion`: the level of occlusion of each face, with possible values including `no` (0), `partial` (1) and `heavy` (2)
-  - `pose`: the pose of each face, with possible values including `typical` (0) and `atypical` (1)
-  - `invalid`: whether the image is valid or invalid
-
-### Data Splits
-
-The data is split into training, validation and testing sets. The WIDER FACE dataset is organized
-into 61 event classes. For each event class, 40%/10%/50% of the
-data is randomly selected for the training, validation and testing sets. The training set contains 12880 images, the validation set 3226 images and the test set 16097 images.
+  - `bbox`: the bounding box of each face (in the [yolo](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#yolo) format)
 
 ## Dataset Creation
 
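Given the yolo-format note above, a minimal sketch of converting the first example box to pixel corners; the `(x_center, y_center, width, height)` reading follows the linked YOLO convention and is an assumption to verify against the actual data:

```python
# Sketch: normalized YOLO box -> pixel-coordinate corners.
def yolo_to_corners(bbox, img_w, img_h):
    x_c, y_c, w, h = bbox
    return ((x_c - w / 2) * img_w, (y_c - h / 2) * img_h,
            (x_c + w / 2) * img_w, (y_c + h / 2) * img_h)

# First box of the example instance on its 1024x755 image:
print(yolo_to_corners([0.230078, 0.317081, 0.239062, 0.331367], 1024, 755))
```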
@@ -133,16 +112,7 @@ with heavy occlusion, small scale, and atypical pose.
 
 #### Initial Data Collection and Normalization
 
-
-The images in WIDER were collected in the following three steps: 1) Event categories
-were defined and chosen following the Large Scale Ontology for Multimedia (LSCOM) [22], which provides around 1000 concepts relevant to video event analysis. 2) Images
-were retrieved using search engines like Google and Bing. For
-each category, 1000-3000 images were collected. 3) The
-data were cleaned by manually examining all the images
-and filtering out images without human faces. Then, similar
-images in each event category were removed to ensure large
-diversity in face appearance. A total of 32203 images are
-eventually included in the WIDER FACE dataset.
+The objective of PP4AV is to build a benchmark dataset that can be used to evaluate face and license plate detection models for autonomous driving. For normal camera data, we sampled images from existing videos in which the cameras were mounted on moving vehicles driving around European cities. We focused on sampling data in urban areas rather than on highways in order to provide sufficient samples of license plates and pedestrians. The images in PP4AV were sampled from **6** European cities at various times of day, including nighttime. For fisheye camera data, we used the fisheye images from the WoodScape dataset, selecting **244** images from the front, rear, left, and right cameras. In total, **3,447** images were selected and annotated in PP4AV.
@@ -152,18 +122,11 @@ The images are selected from publicly available WIDER dataset.
 
 #### Annotation process
 
-
-the recognizable faces in the WIDER FACE dataset. The
-bounding box is required to tightly contain the forehead,
-chin, and cheek. If a face is occluded, they still label it with a bounding box, along with an estimate of the scale of occlusion. Similar to the PASCAL VOC dataset [6], they assign an 'Ignore' flag to any face
-that is very difficult to recognize due to low resolution and small scale (10 pixels or less). After annotating
-the face bounding boxes, they further annotate the following
-attributes: pose (typical, atypical) and occlusion level (partial, heavy). Each annotation is labeled by one annotator
-and cross-checked by two different people.
+Annotators annotated face and license plate objects in the images. For face objects, bounding boxes cover all detectable human faces, from the forehead to the chin to the ears. Faces were labelled across diverse sizes and skin tones, including faces partially obscured by a transparent material such as a car windshield. For license plate objects, bounding boxes cover all recognizable license plates with high variability, such as different sizes, countries, vehicle types (motorcycle, automobile, bus, truck), and occlusions by other vehicles; license plates were annotated for vehicles involved in moving traffic. To ensure annotation quality, a two-step process was used. In the first phase, two teams of annotators independently annotate identical image sets. Once their output is complete, a merging method based on the IoU score between the two teams' bounding boxes is applied: pairs of annotations with an IoU above a threshold are merged and saved as a single annotation, while pairs with an IoU below the threshold are treated as conflicts. In the second phase, two teams of reviewers inspect the conflicting pairs and revise them, after which a second merge similar to the first is applied. The results of the two phases are combined to form the final annotation. All work was conducted in the [CVAT](https://github.com/openvinotoolkit/cvat) tool.
 
 #### Who are the annotators?
 
-
+Vantix Data Science team
 
 ### Personal and Sensitive Information
 
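The merge step described in the added annotation paragraph can be sketched as follows; the corner box format, the coordinate-averaging merge policy, and the 0.5 threshold are assumptions here, and the authors' actual workflow runs in CVAT:

```python
def iou(a, b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def merge_annotations(team_a, team_b, threshold=0.5):
    """Auto-merge agreeing box pairs; return the rest as conflicts."""
    merged, conflicts, remaining = [], [], list(team_b)
    for box_a in team_a:
        best = max(remaining, key=lambda b: iou(box_a, b), default=None)
        if best is not None and iou(box_a, best) >= threshold:
            remaining.remove(best)
            # Merge policy (assumed): average the two teams' coordinates.
            merged.append(tuple((x + y) / 2 for x, y in zip(box_a, best)))
        else:
            conflicts.append(box_a)  # phase two: human review
    conflicts.extend(remaining)      # boxes only team B drew
    return merged, conflicts
```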
@@ -187,7 +150,7 @@ Shuo Yang, Ping Luo, Chen Change Loy and Xiaoou Tang.
 
 ### Dataset Curators
 
-
+Linh Trinh
 
 ### Licensing Information
 
@@ -196,14 +159,15 @@ Shuo Yang, Ping Luo, Chen Change Loy and Xiaoou Tang
 ### Citation Information
 
 ```
-@
-
-
-
-
+@inproceedings{PP4AV2022,
+  title     = {PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving},
+  author    = {Linh Trinh and Phuong Pham and Hoang Trinh and Nguyen Bach and Dung Nguyen and Giang Nguyen and Huy Nguyen},
+  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
+  year      = {2023}
+}
 ```
 
 ### Contributions
 
-Thanks to [@
+Thanks to [@khaclinh](https://github.com/khaclinh) for adding this dataset.
 