qaihm-bot committed
Commit 89939e1 · verified · 1 Parent(s): 439d48a

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -37,8 +37,8 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 3.257 ms | 0 - 2 MB | FP16 | NPU | [ConvNext-Tiny.tflite](https://huggingface.co/qualcomm/ConvNext-Tiny/blob/main/ConvNext-Tiny.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 3.793 ms | 0 - 181 MB | FP16 | NPU | [ConvNext-Tiny.so](https://huggingface.co/qualcomm/ConvNext-Tiny/blob/main/ConvNext-Tiny.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 3.313 ms | 0 - 32 MB | FP16 | NPU | [ConvNext-Tiny.tflite](https://huggingface.co/qualcomm/ConvNext-Tiny/blob/main/ConvNext-Tiny.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 3.839 ms | 0 - 130 MB | FP16 | NPU | [ConvNext-Tiny.so](https://huggingface.co/qualcomm/ConvNext-Tiny/blob/main/ConvNext-Tiny.so)
 
 
 
@@ -100,7 +100,7 @@ python -m qai_hub_models.models.convnext_tiny.export
 Profile Job summary of ConvNext-Tiny
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 3.62 ms
+Estimated Inference Time: 3.63 ms
 Estimated Peak Memory Range: 0.57-0.57 MB
 Compute Units: NPU (223) | Total (223)