---
license: apache-2.0
language:
- en
pipeline_tag: image-classification
---
# Face Recognition System (ArcFace + YOLOv8)
## Overview

This repository hosts a production-ready **Face Recognition Pipeline** designed for high-accuracy biometric identification. Unlike standard recognizers, this system integrates **YOLOv8** for robust face detection and alignment before feature extraction.

The core recognition model is built on a **Wide ResNet-101-2** backbone, trained with a hybrid loss function (**ArcFace + Center Loss**) to generate highly discriminative 512-dimensional embeddings.
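To make the ArcFace part of the loss concrete: the target-class logit is computed from `cos(θ + m)` instead of `cos θ`, where `m` is an additive angular margin that forces embeddings of the same identity closer together. The NumPy sketch below illustrates the idea only; the scale `s=64.0` and margin `m=0.5` are common defaults from the ArcFace paper, not necessarily this repository's training configuration.

```python
import numpy as np

def arcface_logits(embedding, class_weights, target, s=64.0, m=0.5):
    """Apply the additive angular margin to the target-class logit.

    embedding:     (d,) L2-normalized face embedding
    class_weights: (num_classes, d) L2-normalized class centers
    target:        index of the ground-truth class
    """
    cos = class_weights @ embedding               # cosine similarity to every class
    theta = np.arccos(np.clip(cos, -1.0, 1.0))    # angle to every class center
    logits = cos.copy()
    logits[target] = np.cos(theta[target] + m)    # penalize only the target angle
    return s * logits                             # scaled logits fed into softmax/CE

# Toy example with random normalized vectors
rng = np.random.default_rng(0)
emb = rng.normal(size=128); emb /= np.linalg.norm(emb)
W = rng.normal(size=(10, 128)); W /= np.linalg.norm(W, axis=1, keepdims=True)
logits = arcface_logits(emb, W, target=3)
```

Training against these margin-penalized logits is what makes the resulting embeddings well separated under cosine similarity.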
### Key Features
- **Robust Detection**: Uses **YOLOv8 (ONNX)** to detect faces even under challenging lighting or angles.
- **High Accuracy**: Achieves **90.5%** accuracy on the LFW (Labeled Faces in the Wild) benchmark and **90%** on our validation set.
- **Discriminative Embeddings**: 512-dim vectors optimized for cosine similarity.
- **Easy-to-Use API**: Includes a wrapper (`inference.py`) that gets you running in three lines of code.
- **Fine-tuning Ready**: Includes scripts to retrain the model on your custom dataset.
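Because the embeddings are compared with cosine similarity, verification reduces to a normalized dot product plus a decision threshold. A minimal sketch (the `0.5` threshold is illustrative; tune it on your own validation data rather than treating it as this model's calibrated value):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(emb1, emb2, threshold=0.5):
    """Decide 'same person' by thresholding cosine similarity."""
    return cosine_similarity(emb1, emb2) >= threshold
```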
---
## Installation

To run the pipeline, you need to install the necessary dependencies. We recommend using a virtual environment.
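For example, to create and activate a virtual environment on Linux/macOS (on Windows, use `.venv\Scripts\activate` instead):

```bash
# Create an isolated environment in ./.venv and activate it
python3 -m venv .venv
source .venv/bin/activate
```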
```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118  # For CUDA support
pip install opencv-python onnxruntime-gpu huggingface_hub pillow tqdm numpy
```
## Step 1: Download the Wrapper

Download our helper script `inference.py`, which handles model downloading and YOLO detection automatically:

```bash
wget https://huggingface.co/biometric-ai-lab/Face_Recognition/resolve/main/inference.py
```
---
## Step 2: Create & Run a Python Script

- Create a new file named `run_demo.py`.
- Copy and paste the code below into it.
- Make sure you have two images to test (e.g., `face1.jpg` and `face2.jpg`).

```python
# File: run_demo.py
from inference import FaceAnalysis

# 1. Initialize the pipeline (downloads models automatically on first run)
print("Initializing models...")
app = FaceAnalysis()

# 2. Define your images
img1_path = "face1.jpg"  # <--- Change this to your image path
img2_path = "face2.jpg"  # <--- Change this to your image path

# 3. Run the comparison
print(f"Comparing {img1_path} vs {img2_path}...")

try:
    # Get the similarity score and boolean verdict
    similarity, is_same = app.compare(img1_path, img2_path)

    print("-" * 30)
    print(f"Similarity Score: {similarity:.4f}")
    print("-" * 30)

    if is_same:
        print("RESULT: SAME PERSON")
    else:
        print("RESULT: DIFFERENT PERSON")

except Exception as e:
    print(f"Error: {e}")
    print("Tip: Make sure the image paths are correct!")
```
---
## Training Guide

**Option: Full Training (Advanced):** Use `train.py` to train the model from scratch (starting from ImageNet weights) on a large dataset.

**Step 1: Prepare Dataset**

- Organize images in `ImageFolder` format:
```text
dataset/
├── person_1/
│   ├── img1.jpg
│   └── ...
└── person_2/
    └── img1.jpg
```
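In this layout, each person's sub-directory name becomes a class label (the convention `torchvision.datasets.ImageFolder` follows: classes are the sorted sub-directory names). A small standard-library sketch of that mapping, useful for sanity-checking your dataset before training:

```python
import os

def class_to_idx(data_dir):
    """Map each person's folder name to an integer label,
    sorted alphabetically like torchvision's ImageFolder."""
    classes = sorted(
        entry.name for entry in os.scandir(data_dir) if entry.is_dir()
    )
    return {name: idx for idx, name in enumerate(classes)}
```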
**Step 2: Run Training**

```bash
python train.py \
  --data_dir ./dataset \
  --output_dir ./checkpoints \
  --epochs 50 \
  --batch_size 64 \
  --lr_backbone 8e-6 \
  --lr_head 8e-5
```
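The separate `--lr_backbone` and `--lr_head` flags suggest discriminative learning rates: the pretrained backbone gets a small rate while the freshly initialized head gets a larger one. With a PyTorch-style optimizer this is typically expressed as parameter groups; the sketch below illustrates the pattern only and is not a copy of `train.py`.

```python
def build_param_groups(backbone_params, head_params,
                       lr_backbone=8e-6, lr_head=8e-5):
    """Assign a small LR to pretrained backbone weights and a
    larger one to the randomly initialized head. torch.optim
    optimizers accept this list-of-dicts form directly, e.g.
    torch.optim.AdamW(build_param_groups(b, h))."""
    return [
        {"params": list(backbone_params), "lr": lr_backbone},
        {"params": list(head_params), "lr": lr_head},
    ]
```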
## About This Project

This project is developed by a group of undergraduate students from **Ho Chi Minh City University of Technology and Education (HCMUTE)**, **Cohort K23**, as part of academic research and learning activities.