qaihm-bot committed (verified) · commit c5a27e0 · parent: 56394ac

Upload README.md with huggingface_hub

Files changed (1): README.md (+6 -6)
README.md CHANGED
@@ -32,8 +32,8 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.338 ms | 0 - 1 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.025 ms | 0 - 36 MB | FP16 | NPU | [QuickSRNetSmall.so](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.324 ms | 0 - 15 MB | FP16 | NPU | [QuickSRNetSmall.tflite](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 1.01 ms | 0 - 8 MB | FP16 | NPU | [QuickSRNetSmall.so](https://huggingface.co/qualcomm/QuickSRNetSmall/blob/main/QuickSRNetSmall.so)
 
 
 ## Installation
@@ -94,16 +94,16 @@ python -m qai_hub_models.models.quicksrnetsmall.export
 Profile Job summary of QuickSRNetSmall
 --------------------------------------------------
 Device: Samsung Galaxy S24 (14)
-Estimated Inference Time: 0.84 ms
-Estimated Peak Memory Range: 0.02-16.95 MB
+Estimated Inference Time: 0.94 ms
+Estimated Peak Memory Range: 0.02-17.17 MB
 Compute Units: NPU (8),CPU (3) | Total (11)
 
 Profile Job summary of QuickSRNetSmall
 --------------------------------------------------
 Device: Samsung Galaxy S24 (14)
 Estimated Inference Time: 0.62 ms
-Estimated Peak Memory Range: 0.20-13.35 MB
-Compute Units: NPU (12) | Total (12)
+Estimated Peak Memory Range: 0.22-13.52 MB
+Compute Units: NPU (11) | Total (11)
 
 
 ```
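The profile job summaries updated in the second hunk follow a fixed plain-text layout (device, estimated inference time in ms, peak memory range in MB, compute units). A small helper like the sketch below can pull the headline numbers out of such a block — `parse_profile_summary` is a hypothetical name for illustration, not part of `qai_hub_models`:

```python
import re

def parse_profile_summary(text: str) -> dict:
    """Extract device, inference time (ms), and peak memory range (MB)
    from a qai_hub_models-style profile job summary block."""
    device = re.search(r"Device:\s*(.+)", text).group(1).strip()
    time_ms = float(
        re.search(r"Estimated Inference Time:\s*([\d.]+)\s*ms", text).group(1))
    lo, hi = map(float, re.search(
        r"Estimated Peak Memory Range:\s*([\d.]+)-([\d.]+)\s*MB", text).groups())
    return {"device": device, "time_ms": time_ms, "peak_mem_mb": (lo, hi)}

# Example input, copied from the updated profile summary in this commit.
summary = """\
Profile Job summary of QuickSRNetSmall
--------------------------------------------------
Device: Samsung Galaxy S24 (14)
Estimated Inference Time: 0.94 ms
Estimated Peak Memory Range: 0.02-17.17 MB
Compute Units: NPU (8),CPU (3) | Total (11)
"""

print(parse_profile_summary(summary))
```

A helper like this is handy when comparing numbers across commits such as this one, where the TFLite and QNN figures both shifted slightly between profiling runs.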