# VGGFace2 INT8 Face Recognition Model
A high-accuracy INT8 quantized face recognition model optimized for edge devices and challenging image conditions such as low lighting, low resolution, and distorted images.
This model produces 512-dimensional face embeddings and is designed for fast CPU inference using ONNX Runtime.
The model uses a two-stage pipeline:
- Face Detection & Alignment using `buffalo_s` from InsightFace
- Face Recognition using a VGGFace2 INT8 ONNX model
Compared to the standard buffalo_l recognition model, this model shows over 20% higher accuracy in difficult scenarios such as:
- Dim lighting
- Low-quality images
- Compressed images
- Slightly distorted faces
- Long-distance face captures
## Model Details
| Property | Value |
|---|---|
| Architecture | VGGFace2 |
| Precision | INT8 Quantized |
| Embedding Size | 512 |
| Input Size | 160 × 160 |
| Framework | ONNX |
| Inference Engine | ONNX Runtime |
| Detection Model | InsightFace buffalo_s |
| Task | Face Recognition / Verification |
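The table above pins down the I/O contract: an aligned 160 × 160 face crop in, a 512-dimensional embedding out, running on ONNX Runtime's CPU provider. A minimal sketch of that step is below; the model filename and the `(x - 127.5) / 128` normalization constants are assumptions, so check them against the actual preprocessing before relying on this.

```python
import os
import numpy as np

try:
    import onnxruntime as ort  # the inference engine named in the table
except ImportError:
    ort = None

def preprocess(face_bgr: np.ndarray) -> np.ndarray:
    """Convert an aligned 160x160 BGR uint8 crop to a 1x3x160x160 float32
    tensor. The (x - 127.5) / 128 scaling is a common face-model
    convention and an assumption here."""
    x = face_bgr.astype(np.float32)
    x = (x - 127.5) / 128.0
    x = np.transpose(x, (2, 0, 1))   # HWC -> CHW
    return x[np.newaxis, ...]        # add batch dimension

def l2_normalize(v: np.ndarray) -> np.ndarray:
    """Unit-normalize an embedding so cosine similarity is a dot product."""
    return v / np.linalg.norm(v)

def embed(session, face_bgr: np.ndarray) -> np.ndarray:
    """Run the INT8 model and return a unit-length 512-d embedding."""
    inp = session.get_inputs()[0].name
    out = session.run(None, {inp: preprocess(face_bgr)})[0][0]
    return l2_normalize(out)

if ort is not None and os.path.exists("vggface2_int8.onnx"):
    # "vggface2_int8.onnx" is a placeholder filename, not the real one.
    sess = ort.InferenceSession("vggface2_int8.onnx",
                                providers=["CPUExecutionProvider"])
    crop = np.zeros((160, 160, 3), dtype=np.uint8)  # stand-in aligned crop
    print(embed(sess, crop).shape)
```

Unit-normalizing the output means two faces can be compared with a plain dot product, which keeps the verification step cheap on edge hardware.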
## Key Features

### Robust Recognition
Designed to handle:
- dim lighting conditions
- blurred images
- low resolution faces
- partially distorted images
### Edge Device Optimization
INT8 quantization provides:
- smaller model size
- faster CPU inference
- lower memory usage
- reduced latency
Suitable for:
- edge AI systems
- embedded devices
- real-time recognition pipelines
## Face Recognition Pipeline
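The two stages described above (detection and alignment with `buffalo_s`, then INT8 recognition) can be sketched end to end as follows. This is a hedged illustration, not the card's official code: the model filename, the normalization constants, and the 0.4 match threshold are all assumptions you should verify and tune on your own data.

```python
import os
import numpy as np

try:  # heavyweight dependencies; the sketch degrades gracefully without them
    import onnxruntime as ort
    from insightface.app import FaceAnalysis
    from insightface.utils import face_align
    HAVE_DEPS = True
except ImportError:
    HAVE_DEPS = False

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float64)
    b = np.asarray(b, dtype=np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed_face(session, img_bgr, face):
    """Stage 2: align the detected face to 160x160 and run the INT8 model.
    The (x - 127.5) / 128 normalization is an assumption."""
    crop = face_align.norm_crop(img_bgr, landmark=face.kps, image_size=160)
    x = (crop.astype(np.float32) - 127.5) / 128.0
    x = np.transpose(x, (2, 0, 1))[np.newaxis, ...]  # NCHW, batch of 1
    name = session.get_inputs()[0].name
    return session.run(None, {name: x})[0][0]        # 512-d embedding

def verify(app, session, img1_bgr, img2_bgr, threshold=0.4) -> bool:
    """Run both stages on two images and compare the largest faces.
    The 0.4 threshold is illustrative only."""
    embs = []
    for img in (img1_bgr, img2_bgr):
        faces = app.get(img)                         # stage 1: detect + landmarks
        if not faces:
            raise ValueError("no face found")
        largest = max(faces, key=lambda f: f.bbox[2] - f.bbox[0])
        embs.append(embed_face(session, img, largest))
    return cosine_similarity(embs[0], embs[1]) >= threshold

if HAVE_DEPS and os.path.exists("vggface2_int8.onnx"):
    # "vggface2_int8.onnx" is a placeholder filename.
    app = FaceAnalysis(name="buffalo_s")
    app.prepare(ctx_id=-1)                           # negative ctx_id -> CPU
    sess = ort.InferenceSession("vggface2_int8.onnx",
                                providers=["CPUExecutionProvider"])
```

Selecting the largest bounding box is one simple policy for multi-face images; a real deployment might instead embed every detected face.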