VGGFace2 INT8 Face Recognition Model

A high-accuracy INT8 quantized face recognition model optimized for edge devices and challenging image conditions such as low lighting, low resolution, and distorted images.

This model produces 512-dimensional face embeddings and is designed for fast CPU inference using ONNX Runtime.

The model uses a two-stage pipeline:

  1. Face Detection & Alignment using buffalo_s from InsightFace
  2. Face Recognition using a VGGFace2 INT8 ONNX model

Compared to the standard buffalo_l recognition model, this model shows over 20% higher accuracy in difficult scenarios such as:

  • Dim lighting
  • Low-quality images
  • Compressed images
  • Slightly distorted faces
  • Long-distance face captures

Model Details

Property            Value
------------------  -------------------------------
Architecture        VGGFace2
Precision           INT8 quantized
Embedding size      512
Input size          160 × 160
Framework           ONNX
Inference engine    ONNX Runtime
Detection model     InsightFace buffalo_s
Task                Face recognition / verification
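Per the table, the recognizer takes 160 × 160 inputs and emits a 512-dimensional embedding. The exact normalization and channel layout are not documented here, so the sketch below assumes a BGR input crop, NCHW float32 output, and scaling to [-1, 1] (all assumptions); it uses a nearest-neighbour resize to stay dependency-free, where a real pipeline would use cv2.

```python
import numpy as np

def preprocess(face_bgr: np.ndarray) -> np.ndarray:
    """Resize an aligned face crop to 160x160 and scale to [-1, 1].

    Assumes NCHW float32 input is expected; nearest-neighbour resize
    keeps this example dependency-free (use cv2.resize in practice).
    """
    h, w = face_bgr.shape[:2]
    ys = (np.arange(160) * h // 160).clip(0, h - 1)
    xs = (np.arange(160) * w // 160).clip(0, w - 1)
    resized = face_bgr[ys][:, xs]             # (160, 160, 3)
    rgb = resized[..., ::-1].astype(np.float32)
    scaled = (rgb - 127.5) / 127.5            # map [0, 255] to [-1, 1]
    return scaled.transpose(2, 0, 1)[None]    # (1, 3, 160, 160)

crop = np.random.randint(0, 256, (200, 180, 3), dtype=np.uint8)
blob = preprocess(crop)
print(blob.shape)  # (1, 3, 160, 160)
```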

Key Features

Robust Recognition

Designed to handle:

  • dim lighting conditions
  • blurred images
  • low resolution faces
  • partially distorted images

Edge Device Optimization

INT8 quantization provides:

  • smaller model size
  • faster CPU inference
  • lower memory usage
  • reduced latency
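These savings come from storing weights in 8 bits instead of 32. The model's actual quantization scheme is not specified here; the following is a minimal sketch of symmetric per-tensor INT8 quantization, one common approach, showing the 4× size reduction and the bounded rounding error it introduces.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal(4096).astype(np.float32)  # toy weight tensor

# Symmetric per-tensor quantization: map [-max|w|, max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to measure the error introduced by quantization.
deq = q.astype(np.float32) * scale

print(weights.nbytes // q.nbytes)  # 4 (int8 is a quarter the size of float32)
print(float(np.abs(weights - deq).max()) <= scale / 2 + 1e-6)  # True
```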

Suitable for:

  • edge AI systems
  • embedded devices
  • real-time recognition pipelines

Face Recognition Pipeline

Input image → buffalo_s face detection and alignment → 160 × 160 face crop → VGGFace2 INT8 recognizer → 512-dimensional embedding → similarity comparison.
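A minimal end-to-end sketch of the two-stage pipeline. `detect_and_align` and `embed` are hypothetical stand-ins for InsightFace buffalo_s detection and the INT8 ONNX recognizer run through ONNX Runtime; the cosine threshold of 0.4 is illustrative, not a published operating point for this model.

```python
import numpy as np

def detect_and_align(image: np.ndarray) -> np.ndarray:
    """Stand-in for buffalo_s detection + alignment (returns a face crop)."""
    return image[:160, :160]

def embed(face: np.ndarray) -> np.ndarray:
    """Stand-in for the VGGFace2 INT8 ONNX model (512-d embedding).

    A real pipeline would run an onnxruntime.InferenceSession here;
    this dummy is deterministic in the input so the demo is repeatable.
    """
    rng = np.random.default_rng(int(face.sum()) % 2**32)
    return rng.standard_normal(512).astype(np.float32)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(a @ b)

def verify(img_a: np.ndarray, img_b: np.ndarray, threshold: float = 0.4) -> bool:
    """Decide whether two images show the same person (threshold is illustrative)."""
    ea = embed(detect_and_align(img_a))
    eb = embed(detect_and_align(img_b))
    return cosine(ea, eb) >= threshold

img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
print(verify(img, img))  # same image -> identical embeddings -> True
```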
