Datasets:
Upload 33 files
- script/README.md +58 -0
- script/data/1096.jpg +3 -0
- script/data/1116.jpg +3 -0
- script/data/1496.jpg +3 -0
- script/data/1750.jpg +3 -0
- script/data/1763.jpg +3 -0
- script/data/1818.jpg +3 -0
- script/data/1952.jpg +3 -0
- script/data/2246.jpg +3 -0
- script/data/2303.jpg +3 -0
- script/data/2518.jpg +3 -0
- script/data/2687.jpg +3 -0
- script/data/3088.jpg +3 -0
- script/data/3200.jpg +3 -0
- script/data/3239.jpg +3 -0
- script/data/3298.jpg +3 -0
- script/data/3365.jpg +3 -0
- script/data/3878.jpg +3 -0
- script/data/3923.jpg +3 -0
- script/data/4596.jpg +3 -0
- script/data/5393.jpg +3 -0
- script/data/5401.jpg +3 -0
- script/data/541.jpg +3 -0
- script/data/5578.jpg +3 -0
- script/data/5702.jpg +3 -0
- script/data/5754.jpg +3 -0
- script/data/6754.jpg +3 -0
- script/data/7397.jpg +3 -0
- script/data/8879.jpg +3 -0
- script/data/960.jpg +3 -0
- script/data/990.jpg +3 -0
- script/wit_eval_30.csv +31 -0
- script/wit_filter.py +282 -0
script/README.md
ADDED
@@ -0,0 +1,58 @@
## **Installation**

- Install PyTorch
- Install the required Python packages:

```bash
pip install datasets
pip install huggingface_hub
pip install ultralytics
```

## Basic Usage: Run the Filtering on WIT-base

Run the filtering with command-line arguments:

```bash
python wit_filter.py --device cuda:0 --batch_size 32 --output_filtered_data_file_path /path/to/filtered_data_file.parquet
```

- `--device`: Set to "cpu" if no GPU is available (default: cuda:0)
- `--batch_size`: Adjust based on your available memory (default: 32)
- `--output_filtered_data_file_path`: Path to save the filtered results (default: filtered_data_file.parquet)

The filtered dataset will be saved at the path specified by `--output_filtered_data_file_path`.

## Evaluation Mode Usage: Evaluate Detection Performance on a WIT-base Subset

A curated evaluation subset of 30 WIT-base images is included to evaluate the detection models' performance.

To enable evaluation mode and save filtered images into category-specific folders, use the `--eval_mode` flag and specify the image directory:

```bash
python wit_filter.py --device cuda:0 --batch_size 32 --output_filtered_data_file_path /path/to/filtered_data_file.parquet --eval_mode --filtered_image_dir path/to/image_filter_result_dir
```

- `--eval_mode`: Enable evaluation mode to save filtered images into category-specific folders
- `--filtered_image_dir`: Directory where the filtered images will be saved (default: image_filter_result_dir)

Filtered images are organized into subfolders under `filtered_image_dir`:

- `no_face/`: No valid face detected
- `valid_face_no_glasses/`: Valid face detected, no glasses
- `valid_face_with_eyeglasses/`: Valid face with eyeglasses
- `valid_face_with_sunglasses/`: Valid face with sunglasses

### Information about the Evaluation Data

📎 `wit_eval_30.csv`: Metadata for the evaluation set.

| Column | Description |
| --- | --- |
| `idx` | Index in the original WIT-base dataset |
| `has_face` | 0 = No face or face too small, 1 = Valid face |
| `glasses_type` | 0 = No glasses, 1 = Eyeglasses, 2 = Sunglasses |

📎 `data/`: Directory containing all 30 images in the evaluation subset.
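The filtered parquet stores four detection columns (`glasses_score`, `glasses_box`, `face_score`, `face_box`), and per the comments in `wit_filter.py`, which of them are populated encodes the filtering outcome. A minimal sketch of that mapping (the helper name `row_category` is hypothetical, not part of the script):

```python
def row_category(face_score, glasses_score):
    """Classify a detection row by which score columns are populated."""
    if face_score is None:
        # No valid face: all four columns are None.
        return "no_face"
    if glasses_score is None:
        # Valid face but no eyeglasses kept; this covers both the
        # no-glasses and the sunglasses outcomes, which look identical
        # in the parquet columns.
        return "valid_face_no_eyeglasses"
    # Valid face with eyeglasses: all four columns populated.
    return "valid_face_with_eyeglasses"
```

Note that only the eval-mode image folders distinguish sunglasses from no glasses; in the parquet both outcomes leave the glasses columns empty, and the script's final filter keeps only rows where `glasses_score` is populated.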
script/data/*.jpg (30 files)
ADDED
All 30 evaluation images under script/data/ are stored via Git LFS.
script/wit_eval_30.csv
ADDED
@@ -0,0 +1,31 @@
idx,has_face,glasses_type
1496,0,0
1750,0,0
1818,0,0
1952,0,0
2303,0,0
3088,0,0
3365,0,0
3878,0,0
3923,0,0
541,1,0
960,1,0
1096,1,0
1763,1,0
2518,1,0
2687,1,0
3200,1,0
5393,1,0
5702,1,0
990,1,1
2246,1,1
3298,1,1
4596,1,1
5401,1,1
5578,1,1
5754,1,1
7397,1,1
8879,1,1
1116,1,2
3239,1,2
6754,1,2
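The class balance implied by the rows above can be checked with the standard library; the CSV text is reproduced inline so the snippet is self-contained:

```python
import csv
import io

# Contents of wit_eval_30.csv, inlined for a self-contained check.
WIT_EVAL_30 = """idx,has_face,glasses_type
1496,0,0
1750,0,0
1818,0,0
1952,0,0
2303,0,0
3088,0,0
3365,0,0
3878,0,0
3923,0,0
541,1,0
960,1,0
1096,1,0
1763,1,0
2518,1,0
2687,1,0
3200,1,0
5393,1,0
5702,1,0
990,1,1
2246,1,1
3298,1,1
4596,1,1
5401,1,1
5578,1,1
5754,1,1
7397,1,1
8879,1,1
1116,1,2
3239,1,2
6754,1,2
"""

rows = list(csv.DictReader(io.StringIO(WIT_EVAL_30)))
counts = {}
for r in rows:
    key = (int(r["has_face"]), int(r["glasses_type"]))
    counts[key] = counts.get(key, 0) + 1
# Yields 9 no-face, 9 face-no-glasses, 9 face-eyeglasses,
# and 3 face-sunglasses images.
```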
script/wit_filter.py
ADDED
@@ -0,0 +1,282 @@
import torch
import math
import os
import argparse
import logging

from datasets import load_dataset, Features, Sequence, Value, Image
from huggingface_hub import hf_hub_download
from ultralytics import YOLO, YOLOWorld

def parse_args() -> argparse.Namespace:
    """
    Parse command-line arguments for the WIT Data Filtering System.

    Returns:
        argparse.Namespace: Parsed arguments.
    """
    parser = argparse.ArgumentParser(description="WIT Data Filtering System")
    parser.add_argument('--device', type=str, default="cuda:0", help='Device to use for inference')
    parser.add_argument('--batch_size', type=int, default=32, help='Batch size for processing')
    parser.add_argument('--output_filtered_data_file_path', type=str, default="filtered_data_file.parquet", help='Path to save filtered data file')
    parser.add_argument('--eval_mode', action='store_true', help='Enable evaluation mode')
    parser.add_argument('--filtered_image_dir', type=str, default="image_filter_result_dir", help='Directory to save filtered images')
    return parser.parse_args()

# Evaluation data indices in the original WIT dataset.
eval_data_no_face = [1496, 1750, 1818, 1952, 2303, 3088, 3365, 3878, 3923]
eval_data_have_face_no_glasses = [541, 960, 1096, 1763, 2518, 2687, 3200, 5393, 5702]
eval_data_have_face_with_eyeglasses = [990, 2246, 3298, 4596, 5401, 5578, 5754, 7397, 8879]
eval_data_have_face_with_sunglasses = [1116, 3239, 6754]
eval_data_idx = eval_data_no_face + eval_data_have_face_no_glasses + eval_data_have_face_with_eyeglasses + eval_data_have_face_with_sunglasses

# YOLOv8 face detection model: detects faces.
def load_yolo_face_model(device: str) -> YOLO:
    """
    Load the YOLOv8 face detection model.

    Args:
        device (str): Device to load the model on (e.g., 'cuda:0' or 'cpu').

    Returns:
        YOLO: Loaded YOLO face detection model.
    """
    yolo_face_model_path = hf_hub_download(repo_id="arnabdhar/YOLOv8-Face-Detection", filename="model.pt")
    return YOLO(yolo_face_model_path).to(device)

# YOLO-World model: detects eyeglasses and sunglasses.
def load_yolo_world_model(device: str) -> YOLOWorld:
    """
    Load the YOLO-World model for eyeglasses and sunglasses detection.

    Args:
        device (str): Device to load the model on (e.g., 'cuda:0' or 'cpu').

    Returns:
        YOLOWorld: Loaded YOLO-World model.
    """
    yolo_world_model = YOLOWorld("yolov8s-world.pt").to(device)
    yolo_world_model.set_classes(["eyeglasses", "sunglasses"])
    return yolo_world_model


def main() -> None:
    """
    Main function to run the WIT Data Filtering System. Handles argument parsing, model loading,
    dataset loading, detection, filtering, and saving results.
    """
    args = parse_args()
    device = args.device
    batch_size = args.batch_size
    output_filtered_data_file_path = os.path.abspath(os.path.expanduser(args.output_filtered_data_file_path))
    eval_mode = args.eval_mode
    filtered_image_dir = os.path.abspath(os.path.expanduser(args.filtered_image_dir))

    # Paths for saving the filtered images in evaluation mode.
    img_dir_no_face = os.path.join(filtered_image_dir, "no_face")
    img_dir_valid_face_no_glasses = os.path.join(filtered_image_dir, "valid_face_no_glasses")
    img_dir_valid_face_with_eyeglasses = os.path.join(filtered_image_dir, "valid_face_with_eyeglasses")
    img_dir_valid_face_with_sunglasses = os.path.join(filtered_image_dir, "valid_face_with_sunglasses")

    save_filtered_image = eval_mode
    # If the dataset is big, save_filtered_image is forced to `False` (set after loading the dataset).

    if save_filtered_image:
        os.makedirs(img_dir_no_face, exist_ok=True)
        os.makedirs(img_dir_valid_face_no_glasses, exist_ok=True)
        os.makedirs(img_dir_valid_face_with_eyeglasses, exist_ok=True)
        os.makedirs(img_dir_valid_face_with_sunglasses, exist_ok=True)

    # Load models.
    yolo_face_model = load_yolo_face_model(device)
    yolo_world_model = load_yolo_world_model(device)
    face_yolo_threshold = 0.7
    eyeglasses_yolo_threshold = 0.25
    cls_idx_map = {"eyeglasses": 0, "sunglasses": 1}

    def detect_face_and_eyeglasses(examples, idx):
        """
        Detect faces, eyeglasses, and sunglasses in a batch of images.

        Args:
            examples (Dict[str, Any]): Batch of examples from the dataset, containing images.
            idx (List[int]): Indices of the images in the dataset.

        Returns:
            Dict[str, Any]: Detection results including image, glasses_score, glasses_box, face_score, face_box.
        """
        images = []
        for i, image in zip(idx, examples["image"]):
            try:
                image = image.convert("RGB")
                images.append(image)
            except Exception as e:
                logging.warning(f"Failed to load image at index {i}: {e}")
                images.append(None)
                continue
        # Detect faces for the image batch.
        try:
            results_face = yolo_face_model.predict(images, conf=face_yolo_threshold, device=device, verbose=False)
        except Exception as e:
            logging.error(f"Face model inference failed for batch: {e}")
            # Return None for all images in this batch.
            return {
                "image": images,
                "glasses_score": [None] * len(images),
                "glasses_box": [None] * len(images),
                "face_score": [None] * len(images),
                "face_box": [None] * len(images),
            }

        glasses_scores = []
        glasses_boxes = []
        face_scores = []
        face_boxes = []
        for i, image, result_face in zip(idx, images, results_face):
            # Iterate over the face detection result for each image.
            if image is None:
                logging.warning(f"Skip invalid image at index {i}")
                glasses_scores.append(None)
                glasses_boxes.append(None)
                face_scores.append(None)
                face_boxes.append(None)
                continue

            # 1. No face detected.
            if len(result_face.boxes.cls) == 0:
                glasses_scores.append(None)
                glasses_boxes.append(None)
                face_scores.append(None)
                face_boxes.append(None)
                if save_filtered_image:
                    image.save(f"{img_dir_no_face}/{i}.jpg")
                continue

            # 2. Face detected.
            face_score = []
            face_box = []
            has_valid_face = False
            # Filter the face detection results based on the bbox size.
            for j in range(len(result_face.boxes.conf)):
                # Iterate over the detected face bboxes in the current image.
                w, h = math.ceil(result_face.boxes.xywh[j, 2]), math.ceil(result_face.boxes.xywh[j, 3])
                if w >= 100 and h >= 100:
                    has_valid_face = True

                    score = result_face.boxes.conf[j]
                    box_xyxy = [int(x) for x in result_face.boxes.xyxy[j].tolist()]  # [x0, y0, x1, y1]
                    face_score.append(score)
                    face_box.append(box_xyxy)
                else:
                    continue

            # 3. Detected faces are all smaller than 100 px.
            if not has_valid_face:
                glasses_scores.append(None)
                glasses_boxes.append(None)
                face_scores.append(None)
                face_boxes.append(None)
                continue
            else:
                face_scores.append(torch.tensor(face_score))
                face_boxes.append(torch.tensor(face_box))

            # 4. At least one valid face.
            # Detect eyeglasses and sunglasses for the single image with a valid face.
            try:
                result_eyeglasses = yolo_world_model.predict(image, conf=eyeglasses_yolo_threshold, device=device, verbose=False)[0]
            except Exception as e:
                logging.error(f"Eyeglasses model inference failed at index {i}: {e}")
                glasses_scores.append(None)
                glasses_boxes.append(None)
                continue
            # 5. No eyeglasses detected.
            if len(result_eyeglasses.boxes.cls) == 0:
                glasses_scores.append(None)
                glasses_boxes.append(None)
                if save_filtered_image:
                    image.save(f"{img_dir_valid_face_no_glasses}/{i}.jpg")
                continue

            glasses_score = []
            glasses_box = []
            is_eyeglasses = True
            for j in range(len(result_eyeglasses.boxes.conf)):
                # Iterate over the detected glasses bboxes in the current image.
                category = result_eyeglasses.boxes.cls[j]
                if category == cls_idx_map["eyeglasses"]:
                    score = result_eyeglasses.boxes.conf[j]
                    box_xyxy = [int(x) for x in result_eyeglasses.boxes.xyxy[j].tolist()]  # [x0, y0, x1, y1]
                    glasses_score.append(score)
                    glasses_box.append(box_xyxy)
                elif category == cls_idx_map["sunglasses"]:
                    is_eyeglasses = False
                    break

            if not is_eyeglasses:
                # 6. Sunglasses detected; drop the eyeglasses bboxes.
                glasses_scores.append(None)
                glasses_boxes.append(None)
                if save_filtered_image:
                    image.save(f"{img_dir_valid_face_with_sunglasses}/{i}.jpg")
            else:
                # 7. No sunglasses detected; keep the eyeglasses bboxes.
                glasses_scores.append(torch.tensor(glasses_score))  # [n]
                glasses_boxes.append(torch.tensor(glasses_box))  # [n, 4]
                if save_filtered_image:
                    image.save(f"{img_dir_valid_face_with_eyeglasses}/{i}.jpg")

        # No valid face: all four features are None.
        # Valid face without eyeglasses: "face_score" and "face_box" have values; "glasses_score" and "glasses_box" are None.
        # Valid face with eyeglasses: all four features are not None.
        return {
            "image": images,
            "glasses_score": glasses_scores,
            "glasses_box": glasses_boxes,
            "face_score": face_scores,
            "face_box": face_boxes,
        }

    # Load the first two shards of the wit-base dataset.
    base_url = "https://huggingface.co/datasets/wikimedia/wit_base/resolve/main/data/"
    data_files = {"train": [base_url + "train-00000-of-00330.parquet", base_url + "train-00001-of-00330.parquet"]}
    wit = load_dataset("parquet", data_files=data_files, split="train", trust_remote_code=True).cast_column('image', Image())

    # Select the curated subset for evaluation.
    if eval_mode:
        wit = wit.select(eval_data_idx)
        save_filtered_image = True

    # If the dataset is big, force save_filtered_image to `False`.
    if len(wit) > 1000:
        save_filtered_image = False

    # Define new columns to store detection results.
    features = {
        "image": Image(),
        "glasses_score": Sequence(feature=Value(dtype='float16', id=None), length=-1, id=None),
        "glasses_box": Sequence(feature=Sequence(feature=Value(dtype='int16', id=None), length=-1, id=None), length=-1, id=None),
        "face_score": Sequence(feature=Value(dtype='float16', id=None), length=-1, id=None),
        "face_box": Sequence(feature=Sequence(feature=Value(dtype='int16', id=None), length=-1, id=None), length=-1, id=None)
    }
    # Delete unrelated columns.
    remove_columns = wit.column_names
    remove_columns.remove("image")
    # Run the detection.
    wit = wit.map(
        detect_face_and_eyeglasses,
        with_indices=True,
        batched=True,
        batch_size=batch_size,
        features=Features(features),
        remove_columns=remove_columns
    )

    # Filter the dataset based on the detection results.
    wit_filter = wit.filter(lambda example: example["glasses_score"])

    # Save the filtered dataset as a parquet file.
    wit_filter.to_parquet(output_filtered_data_file_path)

if __name__ == "__main__":
    main()
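To score an `--eval_mode` run, each image's destination folder can be compared against its `wit_eval_30.csv` label. A minimal sketch, assuming a `predictions` mapping from dataset index to folder name has already been collected (e.g. by listing the four output subfolders); the helper names here are hypothetical:

```python
# Map (has_face, glasses_type) labels from wit_eval_30.csv to the
# output subfolder an image should land in.
LABEL_TO_FOLDER = {
    (0, 0): "no_face",
    (1, 0): "valid_face_no_glasses",
    (1, 1): "valid_face_with_eyeglasses",
    (1, 2): "valid_face_with_sunglasses",
}

def folder_accuracy(labels, predictions):
    """Fraction of images routed to the folder their label prescribes.

    labels: {idx: (has_face, glasses_type)} from wit_eval_30.csv
    predictions: {idx: folder name the image was saved under}
    """
    correct = sum(
        1 for i, lab in labels.items()
        if predictions.get(i) == LABEL_TO_FOLDER[lab]
    )
    return correct / len(labels)
```

The `predictions` mapping could be built by iterating over the four subfolders with `os.listdir` and recording each image's parent folder name.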