Upload README.md with huggingface_hub

README.md CHANGED

````diff
@@ -36,7 +36,7 @@ More details on model performance across various devices, can be found
 
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 6.
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 6.617 ms | 0 - 2 MB | FP16 | NPU | [DDRNet23-Slim.tflite](https://huggingface.co/qualcomm/DDRNet23-Slim/blob/main/DDRNet23-Slim.tflite)
 
 
 ## Installation
@@ -93,16 +93,6 @@ device. This script does the following:
 python -m qai_hub_models.models.ddrnet23_slim.export
 ```
 
-```
-Profile Job summary of DDRNet23-Slim
---------------------------------------------------
-Device: QCS8550 (Proxy) (12)
-Estimated Inference Time: 6.68 ms
-Estimated Peak Memory Range: 0.96-2.92 MB
-Compute Units: NPU (131) | Total (131)
-
-
-```
 ## How does this work?
 
 This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/DDRNet23-Slim/export.py)
````
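The `python -m` export command shown in the diff's context lines can also be driven from Python. A minimal sketch — assuming the `qai_hub_models` package from this repository is installed, with a graceful fallback when it is not:

```python
# Sketch: invoke the README's export entry point from Python.
# Assumes the qai_hub_models package is installed; otherwise prints the
# equivalent shell command instead of failing.
import importlib.util
import runpy

MODULE = "qai_hub_models.models.ddrnet23_slim.export"

if importlib.util.find_spec("qai_hub_models") is not None:
    # Equivalent to: python -m qai_hub_models.models.ddrnet23_slim.export
    runpy.run_module(MODULE, run_name="__main__")
else:
    print(f"qai_hub_models not installed; run: python -m {MODULE}")
```

Note that after this commit the export script no longer has its sample profile summary reproduced in the README; running it against AI Hub produces the current numbers for your target device.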