Commit 8c46cab (verified; parent d63f0ae) by alonsorobots: Upload README.md with huggingface_hub

---
license: other
license_name: insightface-non-commercial
license_link: https://github.com/deepinsight/insightface#license
tags:
- face-detection
- face-recognition
- scrfd
- arcface
- onnx
- batch-inference
- tensorrt
library_name: onnx
pipeline_tag: image-classification
---

# InsightFace Batch-Optimized Models (Max Batch 64)

Re-exported InsightFace models with **proper dynamic batch support** and **no cross-frame contamination**.

## ⚠️ Version Difference

| Repository | Max Batch | Best For |
|------------|-----------|----------|
| [alonsorobots/scrfd_320_batched](https://huggingface.co/alonsorobots/scrfd_320_batched) | 1-32 | Standard use, tested extensively |
| **This repo** | **1-64** | Experimentation with larger batches |

**Recommendation:** Use max batch=32 for optimal performance. Batch=64 delivers similar throughput but uses more VRAM.

## Why These Models?

The original InsightFace ONNX models have issues with batch inference:

- `buffalo_l` detection model: hardcoded to batch=1
- `buffalo_l_batch` detection model: **broken** - cross-frame contamination from reshape operations that flatten the batch dimension

These re-exports fix the `dynamic_axes` in the ONNX graph for **true batch inference**.

## Models

| Model | Task | Input Shape | Output | Batch | Speedup |
|-------|------|-------------|--------|-------|---------|
| `scrfd_10g_320_batch64.onnx` | Face Detection | `[N, 3, 320, 320]` | boxes, landmarks | 1-64 | **6×** |
| `arcface_w600k_r50_batch64.onnx` | Face Embedding | `[N, 3, 112, 112]` | 512-dim vectors | 1-64 | **10×** |

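Both models expect float32 NCHW input at the shapes above. A minimal preprocessing sketch for the detector, assuming InsightFace's usual `(pixel - 127.5) / 128` normalization with a BGR-to-RGB swap (the helper name `preprocess_bgr` is illustrative; verify the convention against your export):

```python
import numpy as np

def preprocess_bgr(img_bgr: np.ndarray) -> np.ndarray:
    """HxWx3 uint8 BGR frame -> [1, 3, 320, 320] float32 detector input.

    Assumes InsightFace's usual (pixel - 127.5) / 128 normalization and a
    BGR -> RGB channel swap; resize/letterbox to 320x320 beforehand.
    """
    assert img_bgr.shape == (320, 320, 3), "resize to 320x320 first"
    rgb = img_bgr[:, :, ::-1].astype(np.float32)   # BGR -> RGB
    blob = (rgb - 127.5) / 128.0                   # normalize to roughly [-1, 1]
    return blob.transpose(2, 0, 1)[np.newaxis]     # HWC -> NCHW

x = preprocess_bgr(np.full((320, 320, 3), 127, dtype=np.uint8))
print(x.shape, x.dtype)  # (1, 3, 320, 320) float32
```

Concatenate frames along axis 0 to build a batch of up to 64.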
## Performance (TensorRT FP16, RTX 5090)

### Batch Size Comparison (Full Video, 12,263 frames)

| Batch Size | FPS | Relative |
|------------|-----|----------|
| 16 | 2,007 | 1.00× |
| **32** | **2,097** | **1.05×** ✅ Optimal |
| 64 | 2,034 | 1.01× |

**Key finding:** batch=32 is optimal; batch=64 provides no additional benefit because GPU memory bandwidth saturates.

### With Pipelined Preprocessing (4 workers)

| Configuration | FPS | Speedup |
|---------------|-----|---------|
| Sequential batch=16 | 1,211 | baseline |
| **Pipelined batch=32** | **2,097** | **1.73×** |

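The pipelined row above overlaps CPU preprocessing with GPU inference. A minimal sketch of that producer/consumer pattern using a thread pool; `preprocess` and `infer` are stand-ins for the real decode step and the `sess.run` call:

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

BATCH = 32

def preprocess(frames):
    # Stand-in for decode + resize + normalize; returns an NCHW float32 batch.
    return np.stack(frames).astype(np.float32)

def infer(batch):
    # Stand-in for sess.run(None, {"input.1": batch}).
    return batch.mean(axis=(1, 2, 3))

frames = [np.zeros((3, 320, 320), dtype=np.uint8) for _ in range(128)]
chunks = [frames[i:i + BATCH] for i in range(0, len(frames), BATCH)]

results = []
with ThreadPoolExecutor(max_workers=4) as pool:
    # Submit the next chunk for preprocessing while the current batch infers.
    pending = pool.submit(preprocess, chunks[0])
    for nxt in chunks[1:]:
        batch = pending.result()
        pending = pool.submit(preprocess, nxt)
        results.append(infer(batch))
    results.append(infer(pending.result()))

print(len(results))  # 4 batches of 32
```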
## Usage

```python
import numpy as np
import onnxruntime as ort

# Load the model; ONNX Runtime falls back to CUDA if TensorRT is unavailable
sess = ort.InferenceSession(
    "scrfd_10g_320_batch64.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"],
)

# Batch inference (any size from 1-64)
batch = np.random.randn(32, 3, 320, 320).astype(np.float32)
outputs = sess.run(None, {"input.1": batch})

# outputs[0-2]: scores per FPN level (stride 8, 16, 32)
# outputs[3-5]: bboxes per FPN level
# outputs[6-8]: keypoints per FPN level
```

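Turning the raw per-level outputs into pixel boxes follows SCRFD's standard `distance2bbox` decoding: each anchor centre predicts (left, top, right, bottom) distances, scaled by the level's stride. A sketch for one FPN level of a single image; the 2-anchors-per-location layout matches insightface's SCRFD code, but verify shapes against your export:

```python
import numpy as np

def distance2bbox(points, distances):
    """Per-anchor (left, top, right, bottom) distances -> x1y1x2y2 boxes."""
    x1 = points[:, 0] - distances[:, 0]
    y1 = points[:, 1] - distances[:, 1]
    x2 = points[:, 0] + distances[:, 2]
    y2 = points[:, 1] + distances[:, 3]
    return np.stack([x1, y1, x2, y2], axis=-1)

def anchor_centers(stride, size=320, num_anchors=2):
    """Anchor centre coordinates for one FPN level, in input pixels."""
    n = size // stride
    ys, xs = np.mgrid[:n, :n]
    centers = np.stack([xs.ravel(), ys.ravel()], axis=-1).astype(np.float32) * stride
    return np.repeat(centers, num_anchors, axis=0)

# Stride-8 level of one image: 40x40 grid x 2 anchors = 3200 proposals
points = anchor_centers(8)
dists = np.ones((points.shape[0], 4), dtype=np.float32)  # fake network output
boxes = distance2bbox(points, dists * 8)  # raw distances are in stride units
print(boxes.shape)  # (3200, 4)
```

Filter by the matching score tensor and apply NMS to get final detections.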
## TensorRT Configuration

When using TensorRT, set profile shapes to cover your desired batch range:

```python
providers = [
    ("TensorrtExecutionProvider", {
        "trt_fp16_enable": True,
        "trt_engine_cache_enable": True,
        "trt_profile_min_shapes": "input.1:1x3x320x320",
        "trt_profile_opt_shapes": "input.1:32x3x320x320",  # optimize for batch=32
        "trt_profile_max_shapes": "input.1:64x3x320x320",  # support up to 64
    }),
    "CUDAExecutionProvider",
]
```

## Verified: No Batch Contamination

```python
# Same frame processed alone vs. in a batch must yield identical results.
# `sess` is the InferenceSession from the Usage section above.
frame = np.random.randn(3, 320, 320).astype(np.float32)
batch = np.random.randn(32, 3, 320, 320).astype(np.float32)
batch[7] = frame

single_output = sess.run(None, {"input.1": frame[np.newaxis, ...]})
batch_output = sess.run(None, {"input.1": batch})

max_diff = np.max(np.abs(single_output[0] - batch_output[0][7]))
# max_diff < 1e-5 ✓
```

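For the embedding model, the usual downstream step is cosine similarity between L2-normalized 512-dim vectors. A minimal sketch with synthetic embeddings (decision thresholds are dataset-dependent and not specified by this repo):

```python
import numpy as np

def cosine_similarity(a, b):
    # Normalize, then dot product: 1.0 = identical direction, 0.0 = orthogonal
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return float(np.dot(a, b))

rng = np.random.default_rng(0)
emb1 = rng.standard_normal(512).astype(np.float32)  # stand-in for a real embedding
emb2 = emb1 + 0.05 * rng.standard_normal(512).astype(np.float32)

print(cosine_similarity(emb1, emb2))  # near 1.0: likely the same identity
```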
## Re-export Process

These models were re-exported from InsightFace's PyTorch source using MMDetection, with `dynamic_axes` declared for every input and output:

```python
dynamic_axes = {
    "input.1": {0: "batch"},
    "score_8": {0: "batch"},
    "score_16": {0: "batch"},
    # ... all outputs
}
```

## License

**Non-commercial research purposes only**, per the [InsightFace license](https://github.com/deepinsight/insightface#license).

For commercial licensing, contact: `recognition-oss-pack@insightface.ai`

## Credits

- Original models: [InsightFace](https://github.com/deepinsight/insightface) by Jia Guo et al.
- SCRFD paper: [Sample and Computation Redistribution for Efficient Face Detection](https://arxiv.org/abs/2105.04714)
- ArcFace paper: [ArcFace: Additive Angular Margin Loss for Deep Face Recognition](https://arxiv.org/abs/1801.07698)