ynkuai committed on
Commit e28b6bb · verified · 1 Parent(s): 7b16446

Upload 33 files
script/README.md ADDED
## **Installation**

- Install PyTorch
- Install the required Python packages:

```bash
pip install datasets
pip install huggingface_hub
pip install ultralytics
```

## Basic Usage: Run the Filtering on WIT-base

Run the filtering with command-line arguments:

```bash
python wit_filter.py --device cuda:0 --batch_size 32 --output_filtered_data_file_path /path/to/filtered_data_file.parquet
```

- `--device`: Set to `cpu` if no GPU is available (default: `cuda:0`)
- `--batch_size`: Adjust based on your available memory (default: 32)
- `--output_filtered_data_file_path`: Path to save the filtered results (default: `filtered_data_file.parquet`)

The filtered dataset will be saved at the path specified by `--output_filtered_data_file_path`.
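The script stores four detection columns per row (`face_score`, `face_box`, `glasses_score`, `glasses_box`); rows with no valid face leave all four as `None`, rows with a face but no eyeglasses leave only the glasses fields as `None`, and the final filter keeps only rows where eyeglasses were found. A minimal sketch of how a consumer might bucket rows by this pattern (the sample records below are invented for illustration):

```python
# Bucket a detection row by which of the script's optional fields are set.
# Field names match the script's output schema; values here are made up.

def categorize(row: dict) -> str:
    """Classify a row by its None-pattern, mirroring the script's categories."""
    if row["face_score"] is None:
        return "no_face"                       # no valid face detected
    if row["glasses_score"] is None:
        return "valid_face_no_glasses"         # face found, no eyeglasses kept
    return "valid_face_with_eyeglasses"        # face and eyeglasses found

rows = [
    {"face_score": None, "face_box": None, "glasses_score": None, "glasses_box": None},
    {"face_score": [0.91], "face_box": [[10, 10, 200, 220]], "glasses_score": None, "glasses_box": None},
    {"face_score": [0.88], "face_box": [[5, 5, 180, 190]], "glasses_score": [0.42], "glasses_box": [[40, 60, 120, 90]]},
]
print([categorize(r) for r in rows])
```

Rows with sunglasses or no valid face end up with `glasses_score = None`, so only the eyeglasses rows survive into the saved parquet file.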
## Evaluation Mode Usage: Evaluate Detection Performance on a WIT-base Subset

A curated evaluation subset of 30 WIT-base images is included for evaluating detection model performance.

To enable evaluation mode and save filtered images into category-specific folders, use the `--eval_mode` flag and specify the image directory:

```bash
python wit_filter.py --device cuda:0 --batch_size 32 --output_filtered_data_file_path /path/to/filtered_data_file.parquet --eval_mode --filtered_image_dir path/to/image_filter_result_dir
```

- `--eval_mode`: Enable evaluation mode and save filtered images into category-specific folders
- `--filtered_image_dir`: Directory where the filtered images will be saved (default: `image_filter_result_dir`)

Filtered images will be organized into subfolders under `filtered_image_dir`:

- `no_face/`: No valid face detected
- `valid_face_no_glasses/`: Valid face detected, no glasses
- `valid_face_with_eyeglasses/`: Valid face with eyeglasses
- `valid_face_with_sunglasses/`: Valid face with sunglasses
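Once eval mode has sorted the 30 images into these folders, detection performance can be scored against the labels in `wit_eval_30.csv`. A minimal sketch, assuming you have already collected a mapping from image index to output folder (the `predictions` dict below is an invented example, not real script output):

```python
# Score eval-mode folder assignments against (has_face, glasses_type) labels.
# Folder names match the script's output directories.

FOLDER_TO_LABEL = {
    "no_face": (0, 0),
    "valid_face_no_glasses": (1, 0),
    "valid_face_with_eyeglasses": (1, 1),
    "valid_face_with_sunglasses": (1, 2),
}

def accuracy(predictions: dict, labels: dict) -> float:
    """Fraction of images whose predicted (has_face, glasses_type) matches the label."""
    correct = sum(FOLDER_TO_LABEL[folder] == labels[i] for i, folder in predictions.items())
    return correct / len(predictions)

# Hypothetical results for three of the 30 images (idx -> folder the script chose).
predictions = {1496: "no_face", 541: "valid_face_no_glasses", 990: "valid_face_with_sunglasses"}
labels = {1496: (0, 0), 541: (1, 0), 990: (1, 1)}  # ground truth from the CSV
print(accuracy(predictions, labels))  # two of the three hypothetical predictions are correct
```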
### Information about the Evaluation Data

📎 `wit_eval_30.csv`: Metadata for the evaluation set.

| Column | Description |
| --- | --- |
| `idx` | Index in the original WIT-base dataset |
| `has_face` | 0 = No face or too small, 1 = Valid face |
| `glasses_type` | 0 = No glasses, 1 = Eyeglasses, 2 = Sunglasses |

📎 `data/`: Directory containing all 30 images in the evaluation subset.
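The label codes above can be decoded programmatically. A small sketch using the standard `csv` module on an inline sample that mirrors the file's layout:

```python
# Decode wit_eval_30.csv label codes into readable names.
import csv
import io

GLASSES_NAMES = {0: "no_glasses", 1: "eyeglasses", 2: "sunglasses"}

# Inline sample mirroring the file's format (three of the 30 rows).
sample = "idx,has_face,glasses_type\n1496,0,0\n990,1,1\n1116,1,2\n"

# skipinitialspace tolerates stray spaces after commas in the header row.
reader = csv.DictReader(io.StringIO(sample), skipinitialspace=True)
decoded = [
    (int(r["idx"]), bool(int(r["has_face"])), GLASSES_NAMES[int(r["glasses_type"])])
    for r in reader
]
print(decoded)  # [(1496, False, 'no_glasses'), (990, True, 'eyeglasses'), (1116, True, 'sunglasses')]
```

For the full file, replace `io.StringIO(sample)` with `open("wit_eval_30.csv")`.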
script/data/1096.jpg ADDED

Git LFS Details

  • SHA256: 163ca490efd3c5d5e37eb424371bb91662846dc5630979e9ed6a7695643ab40a
  • Pointer size: 130 Bytes
  • Size of remote file: 13.2 kB
script/data/1116.jpg ADDED

Git LFS Details

  • SHA256: 2face770134c63c93bb5861ba7e9b9aa145a1d00d0b83ab12ebd0f80c3e89098
  • Pointer size: 130 Bytes
  • Size of remote file: 27 kB
script/data/1496.jpg ADDED

Git LFS Details

  • SHA256: 697919341db1bc8c7583e6cf64a8b8b870b88e65d4c73e582d3e25508e0e8f22
  • Pointer size: 129 Bytes
  • Size of remote file: 6.45 kB
script/data/1750.jpg ADDED

Git LFS Details

  • SHA256: c83de0b4e058f877727d5fbd8d8fcb6fe3a86b4af9d6fb83ae6ea4003f858374
  • Pointer size: 130 Bytes
  • Size of remote file: 22.1 kB
script/data/1763.jpg ADDED

Git LFS Details

  • SHA256: 89296ad4d639d9a5cd2bd51d71f6719309aa48bc88e7240f8418c9c157743374
  • Pointer size: 130 Bytes
  • Size of remote file: 14.5 kB
script/data/1818.jpg ADDED

Git LFS Details

  • SHA256: 67e81dbb252cac6ac7667859af5a54f0dd4d1d11b39663415246c03323037b43
  • Pointer size: 130 Bytes
  • Size of remote file: 12.6 kB
script/data/1952.jpg ADDED

Git LFS Details

  • SHA256: 595e73aecf8ac23d2e4d1e0da280167ae39a5942febcf1316e3de40227e1b6b7
  • Pointer size: 130 Bytes
  • Size of remote file: 18.9 kB
script/data/2246.jpg ADDED

Git LFS Details

  • SHA256: 289906bc1c4a2dd212ce757d8cd8aea5f5623a87b548ff74dbba6c763e5a6759
  • Pointer size: 130 Bytes
  • Size of remote file: 20.4 kB
script/data/2303.jpg ADDED

Git LFS Details

  • SHA256: 0da494326ce1ba09f6f701152c12222aefabeaea77181b5172dc52122787df55
  • Pointer size: 130 Bytes
  • Size of remote file: 18.3 kB
script/data/2518.jpg ADDED

Git LFS Details

  • SHA256: 80fb83d380a0bbfdde5c60a434091d893cdade890a5f7c1bb002e946f978e8df
  • Pointer size: 130 Bytes
  • Size of remote file: 12 kB
script/data/2687.jpg ADDED

Git LFS Details

  • SHA256: 558cce112411e801199a1f9796a40d4fedbcfbf10522785a1329c352d374c38f
  • Pointer size: 130 Bytes
  • Size of remote file: 41 kB
script/data/3088.jpg ADDED

Git LFS Details

  • SHA256: 18b343c47e72b7fd24f4a0c73578a97bd43f53de1fbd2cbc790a73561d5a5747
  • Pointer size: 130 Bytes
  • Size of remote file: 21.4 kB
script/data/3200.jpg ADDED

Git LFS Details

  • SHA256: f1355996fc3780048118c5958395c3f27cde489c126b2051564f439392ffdbdf
  • Pointer size: 130 Bytes
  • Size of remote file: 32.2 kB
script/data/3239.jpg ADDED

Git LFS Details

  • SHA256: 23ff893274d852b0d31205b2e2959603b7fa4b52a3db7a5e4627b01aa057638d
  • Pointer size: 130 Bytes
  • Size of remote file: 34.4 kB
script/data/3298.jpg ADDED

Git LFS Details

  • SHA256: e1dbd1de578a1faf51f92af8d1e6c21178cf07ee9a9d4c61fc7e4056bbab8a22
  • Pointer size: 130 Bytes
  • Size of remote file: 21.7 kB
script/data/3365.jpg ADDED

Git LFS Details

  • SHA256: aba23df13030083c88e5e95bc7645f5048300a67125dac3bf1b2e4d16cd65227
  • Pointer size: 130 Bytes
  • Size of remote file: 20.4 kB
script/data/3878.jpg ADDED

Git LFS Details

  • SHA256: 1501d892eadd656eec56a7aaeff02bd0ce8fb8a82bd8641ba12583a892f84ba8
  • Pointer size: 130 Bytes
  • Size of remote file: 13.4 kB
script/data/3923.jpg ADDED

Git LFS Details

  • SHA256: 7a8b1dc499db08bc515a2f4be286c1b49f0ffca620c7e1fb714b47066d1b60e4
  • Pointer size: 130 Bytes
  • Size of remote file: 32 kB
script/data/4596.jpg ADDED

Git LFS Details

  • SHA256: 658d2b8b063244047dbe9bbec88bafb3c0591e5e416a4b5da85b5f06b81a22be
  • Pointer size: 130 Bytes
  • Size of remote file: 34.7 kB
script/data/5393.jpg ADDED

Git LFS Details

  • SHA256: 264923ce891a6cdb6144f4f43726efe8e4b3a51a7a3d7ec4005e6fff47dfed87
  • Pointer size: 130 Bytes
  • Size of remote file: 25.2 kB
script/data/5401.jpg ADDED

Git LFS Details

  • SHA256: b878309048254101e6bf3506d840be81901151fd8e80aa2bec03f11d025a5f6e
  • Pointer size: 130 Bytes
  • Size of remote file: 31 kB
script/data/541.jpg ADDED

Git LFS Details

  • SHA256: 009dd3134bee67cb3be83236dacb992c155dce7040c8d1c10a7ad7a1b0a6df34
  • Pointer size: 130 Bytes
  • Size of remote file: 28.7 kB
script/data/5578.jpg ADDED

Git LFS Details

  • SHA256: 3257621f4651061c68308823c1d9675a944b1a605397cf20b93ed00b44163564
  • Pointer size: 130 Bytes
  • Size of remote file: 17.7 kB
script/data/5702.jpg ADDED

Git LFS Details

  • SHA256: 11aa8137b26051bb748a0724d5afdb04d125bdaaf971d4dd78da1317f868a196
  • Pointer size: 130 Bytes
  • Size of remote file: 23.3 kB
script/data/5754.jpg ADDED

Git LFS Details

  • SHA256: 42525790cc583440a5e99b770870fe731de2d9d48de2d769dc2318418e0d1143
  • Pointer size: 130 Bytes
  • Size of remote file: 20.9 kB
script/data/6754.jpg ADDED

Git LFS Details

  • SHA256: 11bb49b693b88dabe68e0739c5fa8f454d6dbb6d68c973a0de39e1dc7fcaf831
  • Pointer size: 130 Bytes
  • Size of remote file: 22.4 kB
script/data/7397.jpg ADDED

Git LFS Details

  • SHA256: bbdd41ad1adf53301a6c3f648ef7c69b069438225477b8f5f700231f849a32e0
  • Pointer size: 130 Bytes
  • Size of remote file: 21.5 kB
script/data/8879.jpg ADDED

Git LFS Details

  • SHA256: af23377339152a01540a040a6c13e29ced24b52d0fe89099d79176f050fa74c0
  • Pointer size: 130 Bytes
  • Size of remote file: 28 kB
script/data/960.jpg ADDED

Git LFS Details

  • SHA256: d5a3d0d8704ed7615cc7375d56056a0f4a318720aa1e18438b11392bf7337bf9
  • Pointer size: 130 Bytes
  • Size of remote file: 18.1 kB
script/data/990.jpg ADDED

Git LFS Details

  • SHA256: d502acd5a52479ef20c873a77a32e592c1ce70bfda7fa895ad98fd85917bcfc8
  • Pointer size: 130 Bytes
  • Size of remote file: 26.7 kB
script/wit_eval_30.csv ADDED
idx,has_face,glasses_type
1496,0,0
1750,0,0
1818,0,0
1952,0,0
2303,0,0
3088,0,0
3365,0,0
3878,0,0
3923,0,0
541,1,0
960,1,0
1096,1,0
1763,1,0
2518,1,0
2687,1,0
3200,1,0
5393,1,0
5702,1,0
990,1,1
2246,1,1
3298,1,1
4596,1,1
5401,1,1
5578,1,1
5754,1,1
7397,1,1
8879,1,1
1116,1,2
3239,1,2
6754,1,2
script/wit_filter.py ADDED
import torch
import math
import os
import argparse
import logging

from datasets import load_dataset, Features, Sequence, Value, Image
from huggingface_hub import hf_hub_download
from ultralytics import YOLO, YOLOWorld


def parse_args() -> argparse.Namespace:
    """
    Parse command-line arguments for the WIT Data Filtering System.

    Returns:
        argparse.Namespace: Parsed arguments.
    """
    parser = argparse.ArgumentParser(description="WIT Data Filtering System")
    parser.add_argument('--device', type=str, default="cuda:0", help='Device to use for inference')
    parser.add_argument('--batch_size', type=int, default=32, help='Batch size for processing')
    parser.add_argument('--output_filtered_data_file_path', type=str, default="filtered_data_file.parquet", help='Path to save filtered data file')
    parser.add_argument('--eval_mode', action='store_true', help='Enable evaluation mode')
    parser.add_argument('--filtered_image_dir', type=str, default="image_filter_result_dir", help='Directory to save filtered images')
    return parser.parse_args()


# Evaluation data indices in the original WIT dataset.
eval_data_no_face = [1496, 1750, 1818, 1952, 2303, 3088, 3365, 3878, 3923]
eval_data_have_face_no_glasses = [541, 960, 1096, 1763, 2518, 2687, 3200, 5393, 5702]
eval_data_have_face_with_eyeglasses = [990, 2246, 3298, 4596, 5401, 5578, 5754, 7397, 8879]
eval_data_have_face_with_sunglasses = [1116, 3239, 6754]
eval_data_idx = eval_data_no_face + eval_data_have_face_no_glasses + eval_data_have_face_with_eyeglasses + eval_data_have_face_with_sunglasses


# YOLOv8-Face-Detection model: detect faces.
def load_yolo_face_model(device: str) -> YOLO:
    """
    Load the YOLOv8 face detection model.

    Args:
        device (str): Device to load the model on (e.g., 'cuda:0' or 'cpu').

    Returns:
        YOLO: Loaded YOLO face detection model.
    """
    yolo_face_model_path = hf_hub_download(repo_id="arnabdhar/YOLOv8-Face-Detection", filename="model.pt")
    return YOLO(yolo_face_model_path).to(device)


# YOLO-World model: detect eyeglasses and sunglasses.
def load_yolo_world_model(device: str) -> YOLOWorld:
    """
    Load the YOLO-World model for eyeglasses and sunglasses detection.

    Args:
        device (str): Device to load the model on (e.g., 'cuda:0' or 'cpu').

    Returns:
        YOLOWorld: Loaded YOLO-World model.
    """
    yolo_world_model = YOLOWorld("yolov8s-world.pt").to(device)
    yolo_world_model.set_classes(["eyeglasses", "sunglasses"])
    return yolo_world_model


def main() -> None:
    """
    Main function for the WIT Data Filtering System. Handles argument parsing, model loading,
    dataset loading, detection, filtering, and saving results.
    """
    args = parse_args()
    device = args.device
    batch_size = args.batch_size
    output_filtered_data_file_path = os.path.abspath(os.path.expanduser(args.output_filtered_data_file_path))
    eval_mode = args.eval_mode
    filtered_image_dir = os.path.abspath(os.path.expanduser(args.filtered_image_dir))

    # Paths for saving the filtered images in evaluation mode.
    img_dir_no_face = os.path.join(filtered_image_dir, "no_face")
    img_dir_valid_face_no_glasses = os.path.join(filtered_image_dir, "valid_face_no_glasses")
    img_dir_valid_face_with_eyeglasses = os.path.join(filtered_image_dir, "valid_face_with_eyeglasses")
    img_dir_valid_face_with_sunglasses = os.path.join(filtered_image_dir, "valid_face_with_sunglasses")

    save_filtered_image = eval_mode
    # If the dataset is big, save_filtered_image is forced to `False` (set after loading the dataset).

    if save_filtered_image:
        os.makedirs(img_dir_no_face, exist_ok=True)
        os.makedirs(img_dir_valid_face_no_glasses, exist_ok=True)
        os.makedirs(img_dir_valid_face_with_eyeglasses, exist_ok=True)
        os.makedirs(img_dir_valid_face_with_sunglasses, exist_ok=True)

    # Load models.
    yolo_face_model = load_yolo_face_model(device)
    yolo_world_model = load_yolo_world_model(device)
    face_yolo_threshold = 0.7
    eyeglasses_yolo_threshold = 0.25
    cls_idx_map = {"eyeglasses": 0, "sunglasses": 1}

    def detect_face_and_eyeglasses(examples, idx):
        """
        Detect faces, eyeglasses, and sunglasses in a batch of images.

        Args:
            examples (Dict[str, Any]): Batch of examples from the dataset, containing images.
            idx (List[int]): Indices of the images in the dataset.

        Returns:
            Dict[str, Any]: Detection results including image, glasses_score, glasses_box, face_score, face_box.
        """
        images = []
        for i, image in zip(idx, examples["image"]):
            try:
                images.append(image.convert("RGB"))
            except Exception as e:
                logging.warning(f"Failed to load image at index {i}: {e}")
                images.append(None)

        # Detect faces for the valid images in the batch, then re-align the results with
        # the original batch order so one unreadable image does not invalidate its batch.
        valid_images = [image for image in images if image is not None]
        try:
            results_face_valid = yolo_face_model.predict(valid_images, conf=face_yolo_threshold, device=device, verbose=False) if valid_images else []
        except Exception as e:
            logging.error(f"Face model inference failed for batch: {e}")
            # Return None for all images in this batch.
            return {
                "image": images,
                "glasses_score": [None] * len(images),
                "glasses_box": [None] * len(images),
                "face_score": [None] * len(images),
                "face_box": [None] * len(images),
            }
        results_iter = iter(results_face_valid)
        results_face = [next(results_iter) if image is not None else None for image in images]

        glasses_scores = []
        glasses_boxes = []
        face_scores = []
        face_boxes = []
        for i, image, result_face in zip(idx, images, results_face):
            # Iterate over the face detection result for each image.
            if image is None:
                logging.warning(f"Skipping invalid image at index {i}")
                glasses_scores.append(None)
                glasses_boxes.append(None)
                face_scores.append(None)
                face_boxes.append(None)
                continue

            # 1. No face detected.
            if len(result_face.boxes.cls) == 0:
                glasses_scores.append(None)
                glasses_boxes.append(None)
                face_scores.append(None)
                face_boxes.append(None)
                if save_filtered_image:
                    image.save(f"{img_dir_no_face}/{i}.jpg")
                continue

            # 2. Face detected.
            face_score = []
            face_box = []
            has_valid_face = False
            # Filter the face detections based on bbox size.
            for j in range(len(result_face.boxes.conf)):
                # Iterate over the detected face bboxes in the current image.
                w, h = math.ceil(result_face.boxes.xywh[j, 2]), math.ceil(result_face.boxes.xywh[j, 3])
                if w >= 100 and h >= 100:
                    has_valid_face = True
                    score = float(result_face.boxes.conf[j])  # 0-dim tensor -> float
                    box_xyxy = [int(x) for x in result_face.boxes.xyxy[j].tolist()]  # [x0, y0, x1, y1]
                    face_score.append(score)
                    face_box.append(box_xyxy)

            # 3. All detected faces are smaller than 100 px.
            if not has_valid_face:
                glasses_scores.append(None)
                glasses_boxes.append(None)
                face_scores.append(None)
                face_boxes.append(None)
                continue
            else:
                face_scores.append(torch.tensor(face_score))
                face_boxes.append(torch.tensor(face_box))

            # 4. At least one valid face.
            # Detect eyeglasses and sunglasses for the single image with a valid face.
            try:
                result_eyeglasses = yolo_world_model.predict(image, conf=eyeglasses_yolo_threshold, device=device, verbose=False)[0]
            except Exception as e:
                logging.error(f"Eyeglasses model inference failed at index {i}: {e}")
                glasses_scores.append(None)
                glasses_boxes.append(None)
                continue

            # 5. No eyeglasses detected.
            if len(result_eyeglasses.boxes.cls) == 0:
                glasses_scores.append(None)
                glasses_boxes.append(None)
                if save_filtered_image:
                    image.save(f"{img_dir_valid_face_no_glasses}/{i}.jpg")
                continue

            glasses_score = []
            glasses_box = []
            is_eyeglasses = True
            for j in range(len(result_eyeglasses.boxes.conf)):
                # Iterate over the detected glasses bboxes in the current image.
                category = result_eyeglasses.boxes.cls[j]
                if category == cls_idx_map["eyeglasses"]:
                    score = float(result_eyeglasses.boxes.conf[j])  # 0-dim tensor -> float
                    box_xyxy = [int(x) for x in result_eyeglasses.boxes.xyxy[j].tolist()]  # [x0, y0, x1, y1]
                    glasses_score.append(score)
                    glasses_box.append(box_xyxy)
                elif category == cls_idx_map["sunglasses"]:
                    is_eyeglasses = False
                    break

            if not is_eyeglasses:
                # 6. Sunglasses detected: drop the eyeglasses bboxes.
                glasses_scores.append(None)
                glasses_boxes.append(None)
                if save_filtered_image:
                    image.save(f"{img_dir_valid_face_with_sunglasses}/{i}.jpg")
            else:
                # 7. No sunglasses detected: keep the eyeglasses bboxes.
                glasses_scores.append(torch.tensor(glasses_score))  # [n]
                glasses_boxes.append(torch.tensor(glasses_box))  # [n, 4]
                if save_filtered_image:
                    image.save(f"{img_dir_valid_face_with_eyeglasses}/{i}.jpg")

        # No valid face: all four features are None.
        # Valid face without eyeglasses: "face_score" and "face_box" have values; "glasses_score" and "glasses_box" are None.
        # Valid face with eyeglasses: all four features are not None.
        return {
            "image": images,
            "glasses_score": glasses_scores,
            "glasses_box": glasses_boxes,
            "face_score": face_scores,
            "face_box": face_boxes,
        }

    # Load the first two shards of the wit-base dataset.
    base_url = "https://huggingface.co/datasets/wikimedia/wit_base/resolve/main/data/"
    data_files = {"train": [base_url + "train-00000-of-00330.parquet", base_url + "train-00001-of-00330.parquet"]}
    wit = load_dataset("parquet", data_files=data_files, split="train", trust_remote_code=True).cast_column('image', Image())

    # Select the curated subset for evaluation.
    if eval_mode:
        wit = wit.select(eval_data_idx)
        save_filtered_image = True

    # If the dataset is big, force save_filtered_image to `False`.
    if len(wit) > 1000:
        save_filtered_image = False

    # Define new columns to store detection results.
    features = {
        "image": Image(),
        "glasses_score": Sequence(feature=Value(dtype='float16', id=None), length=-1, id=None),
        "glasses_box": Sequence(feature=Sequence(feature=Value(dtype='int16', id=None), length=-1, id=None), length=-1, id=None),
        "face_score": Sequence(feature=Value(dtype='float16', id=None), length=-1, id=None),
        "face_box": Sequence(feature=Sequence(feature=Value(dtype='int16', id=None), length=-1, id=None), length=-1, id=None)
    }
    # Drop unrelated columns.
    remove_columns = wit.column_names
    remove_columns.remove("image")
    # Run the detection.
    wit = wit.map(
        detect_face_and_eyeglasses,
        with_indices=True,
        batched=True,
        batch_size=batch_size,
        features=Features(features),
        remove_columns=remove_columns
    )

    # Filter the dataset based on the detection results: keep only rows with eyeglasses.
    wit_filter = wit.filter(lambda example: example["glasses_score"])

    # Save the filtered dataset as a parquet file.
    wit_filter.to_parquet(output_filtered_data_file_path)


if __name__ == "__main__":
    main()