qaihm-bot committed · verified · Commit e976cd9 · 1 Parent(s): ae3775d

Upload README.md with huggingface_hub

Files changed (1): README.md (+12 −6)
README.md CHANGED

@@ -30,10 +30,13 @@ More details on model performance across various devices, can be found
 - Model size: 64.0 MB
 
 
+
+
 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 67.687 ms | 3 - 8 MB | FP16 | NPU | [ESRGAN.tflite](https://huggingface.co/qualcomm/ESRGAN/blob/main/ESRGAN.tflite)
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 66.775 ms | 0 - 101 MB | FP16 | NPU | [ESRGAN.so](https://huggingface.co/qualcomm/ESRGAN/blob/main/ESRGAN.so)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 66.52 ms | 4 - 7 MB | FP16 | NPU | [ESRGAN.tflite](https://huggingface.co/qualcomm/ESRGAN/blob/main/ESRGAN.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 67.593 ms | 0 - 100 MB | FP16 | NPU | [ESRGAN.so](https://huggingface.co/qualcomm/ESRGAN/blob/main/ESRGAN.so)
+
 
 
 ## Installation
@@ -94,15 +97,17 @@ python -m qai_hub_models.models.esrgan.export
 Profile Job summary of ESRGAN
 --------------------------------------------------
 Device: Snapdragon X Elite CRD (11)
-Estimated Inference Time: 73.24 ms
-Estimated Peak Memory Range: 0.20-0.20 MB
+Estimated Inference Time: 73.14 ms
+Estimated Peak Memory Range: 0.21-0.21 MB
 Compute Units: NPU (1026) | Total (1026)
 
 
 ```
+
+
 ## How does this work?
 
-This [export script](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/ESRGAN/export.py)
+This [export script](https://aihub.qualcomm.com/models/esrgan/qai_hub_models/models/ESRGAN/export.py)
 leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
 on-device. Lets go through each step below in detail:
 
@@ -180,6 +185,7 @@ AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
 
+
 ## Deploying compiled model to Android
 
 
@@ -201,7 +207,7 @@ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)
 ## License
 - The license for the original implementation of ESRGAN can be found
 [here](https://github.com/xinntao/ESRGAN/blob/master/LICENSE).
-- The license for the compiled assets for on-device deployment can be found [here]({deploy_license_url})
+- The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
 
 ## References
 * [ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks](https://arxiv.org/abs/1809.00219)
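The substance of this commit is a refresh of the benchmark numbers. A quick sketch of the per-runtime inference-time deltas, with the millisecond values copied from the table rows in the diff (the comparison helper itself is illustrative, not part of qai_hub_models):

```python
# Inference times (ms) on Samsung Galaxy S23 Ultra, copied from the README diff.
before = {"TFLite": 67.687, "QNN Model Library": 66.775}
after = {"TFLite": 66.52, "QNN Model Library": 67.593}

# Positive delta = slower in the updated benchmark run, negative = faster.
deltas = {runtime: round(after[runtime] - before[runtime], 3) for runtime in before}
print(deltas)  # TFLite got ~1.17 ms faster; QNN got ~0.82 ms slower
```

Run-to-run noise of under 2 ms on a ~67 ms workload is within normal variation for on-device profiling, which is consistent with these being routine re-measurements rather than a model change.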