# eDifFIQA ONNX
ONNX exports of the eDifFIQA face image quality models.
These models estimate the visual quality of a face image. They can be used before face recognition or face verification to filter low-quality face crops.
Note: The models expect cropped/aligned face images.
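The filtering step mentioned above can be sketched as follows. This is a minimal illustration, not part of the model card: the `0.5` threshold is an arbitrary example value, and scores are assumed to be higher-is-better.

```python
def filter_by_quality(scores: list[float], threshold: float = 0.5) -> list[int]:
    """Return indices of face crops whose quality score passes the threshold."""
    return [i for i, score in enumerate(scores) if score >= threshold]

# Hypothetical per-crop quality scores; keep only crops deemed good enough.
kept = filter_by_quality([0.91, 0.12, 0.67, 0.40])
print(kept)  # -> [0, 2]
```

In practice the threshold should be tuned on your own data, since the score distribution depends on the model variant and the face crops.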
## Export Details
Models were exported with torch==2.10.0 and simplified with onnxslim==0.1.93. All models were exported with dynamic batch size support and ONNX opset 20.
## Available Models
| Variant | File | Size |
|---|---|---|
| eDifFIQA-T | ediffiqa_t.onnx | tiny |
| eDifFIQA-S | ediffiqa_s.onnx | small |
| eDifFIQA-M | ediffiqa_m.onnx | medium |
| eDifFIQA-L | ediffiqa_l.onnx | large |
## Usage
```python
from pathlib import Path

import cv2
import numpy as np
import onnxruntime as ort


class EDifFIQAOnnx:
    def __init__(
        self,
        model_path: Path | str,
        input_size: tuple[int, int] = (112, 112),
        providers: list[str] | None = None,
    ) -> None:
        self.model_path = str(model_path)
        self.input_size = input_size
        self.session = ort.InferenceSession(
            self.model_path,
            providers=providers or ["CPUExecutionProvider"],
        )
        self.input_name = self.session.get_inputs()[0].name
        self.output_name = self.session.get_outputs()[0].name

    def infer(self, image_bgr: np.ndarray) -> float:
        """
        Run quality inference on an aligned face image in BGR channel order.

        :param image_bgr: Aligned face image (BGR).
        :return: Quality score.
        """
        tensor = self._preprocess(image_bgr)
        output = self.session.run([self.output_name], {self.input_name: tensor})[0]
        return float(np.squeeze(output))

    def _preprocess(self, image_bgr: np.ndarray) -> np.ndarray:
        """
        Convert a BGR image to a normalized NCHW input tensor.

        :param image_bgr: BGR image.
        :return: Input tensor with shape (1, 3, 112, 112), values in [-1, 1].
        """
        image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
        image_rgb = cv2.resize(image_rgb, self.input_size)
        image = image_rgb.astype(np.float32)
        image = ((image / 255.0) - 0.5) / 0.5  # scale [0, 255] to [-1, 1]
        image = np.transpose(image, (2, 0, 1))[None, ...]  # HWC -> NCHW
        return image.astype(np.float32)
```
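The preprocessing maps uint8 pixels to [-1, 1] in NCHW layout. A numpy-only sanity check of that contract, using a synthetic image so that no model file or OpenCV resize is needed:

```python
import numpy as np

# Synthetic 112x112 BGR image standing in for an aligned face crop.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(112, 112, 3), dtype=np.uint8)

# Same normalization as above: [0, 255] -> [-1, 1], then HWC -> NCHW.
tensor = ((image.astype(np.float32) / 255.0) - 0.5) / 0.5
tensor = np.transpose(tensor, (2, 0, 1))[None, ...]

print(tensor.shape)  # (1, 3, 112, 112)
print(tensor.min() >= -1.0 and tensor.max() <= 1.0)  # True
```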