Upload README.md with huggingface_hub
README.md CHANGED
@@ -35,8 +35,8 @@ More details on model performance across various devices can be found

 | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
 | ---|---|---|---|---|---|---|---|
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite |
-| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite |
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 47.928 ms | 4 - 22 MB | FP16 | NPU | [SAMDecoder.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model/blob/main/SAMDecoder.tflite)
+| Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 10523.061 ms | 2482 - 2528 MB | FP32 | CPU | [SAMEncoder.tflite](https://huggingface.co/qualcomm/Segment-Anything-Model/blob/main/SAMEncoder.tflite)
@@ -98,17 +98,17 @@ python -m qai_hub_models.models.sam.export
 ```
 Profile Job summary of SAMDecoder
 --------------------------------------------------
-Device:
-Estimated Inference Time:
-Estimated Peak Memory Range:
-Compute Units: NPU (
+Device: SA8255 (Proxy) (13)
+Estimated Inference Time: 48.42 ms
+Estimated Peak Memory Range: 2.11-19.94 MB
+Compute Units: NPU (337) | Total (337)

 Profile Job summary of SAMEncoder
 --------------------------------------------------
-Device:
-Estimated Inference Time:
-Estimated Peak Memory Range:
-Compute Units: GPU (
+Device: SA8255 (Proxy) (13)
+Estimated Inference Time: 11251.67 ms
+Estimated Peak Memory Range: 2591.24-2594.47 MB
+Compute Units: GPU (36), CPU (782) | Total (818)

 ```
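Note the compute-unit split in the new summaries: all 337 decoder layers run on the NPU, while the encoder falls back to GPU (36 layers) and CPU (782 layers), which is why its estimated inference time is roughly 230× the decoder's and mirrors the FP32/CPU encoder row in the table above.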
@@ -129,29 +129,49 @@ in memory using the `jit.trace` and then call the `submit_compile_job` API.

 import torch

 import qai_hub as hub
-from qai_hub_models.models.sam import
+from qai_hub_models.models.sam import SAMDecoder, SAMEncoder

 # Load the model
-
+sam_decoder_model = SAMDecoder.from_pretrained()
+sam_encoder_model = SAMEncoder.from_pretrained()

 # Device
 device = hub.Device("Samsung Galaxy S23")

 # Trace model
-
-
-
+sam_decoder_input_shape = sam_decoder_model.get_input_spec()
+sam_decoder_sample_inputs = sam_decoder_model.sample_inputs()
+
+traced_sam_decoder_model = torch.jit.trace(sam_decoder_model, [torch.tensor(data[0]) for _, data in sam_decoder_sample_inputs.items()])

 # Compile model on a specific device
-
-model=
+sam_decoder_compile_job = hub.submit_compile_job(
+    model=traced_sam_decoder_model,
     device=device,
-input_specs=
+    input_specs=sam_decoder_model.get_input_spec(),
 )

 # Get target model to run on-device
-
+sam_decoder_target_model = sam_decoder_compile_job.get_target_model()
+
+# Trace model
+sam_encoder_input_shape = sam_encoder_model.get_input_spec()
+sam_encoder_sample_inputs = sam_encoder_model.sample_inputs()
+
+traced_sam_encoder_model = torch.jit.trace(sam_encoder_model, [torch.tensor(data[0]) for _, data in sam_encoder_sample_inputs.items()])
+
+# Compile model on a specific device
+sam_encoder_compile_job = hub.submit_compile_job(
+    model=traced_sam_encoder_model,
+    device=device,
+    input_specs=sam_encoder_model.get_input_spec(),
+)
+
+# Get target model to run on-device
+sam_encoder_target_model = sam_encoder_compile_job.get_target_model()

 ```
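If you want the compiled assets locally (for example, to bundle into an Android app), a minimal follow-up sketch, assuming the standard `qai_hub` `Job.wait()` and `Model.download()` helpers; the filenames are illustrative:

```python
# Block until both compile jobs finish (Job.wait() is assumed here).
sam_decoder_compile_job.wait()
sam_encoder_compile_job.wait()

# Save the compiled target models locally; filenames are illustrative.
sam_decoder_target_model.download("SAMDecoder.tflite")
sam_encoder_target_model.download("SAMEncoder.tflite")
```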
@@ -163,10 +183,16 @@ After compiling models from step 1, models can be profiled on-device using
 provisioned in the cloud. Once the job is submitted, you can navigate to a
 provided job URL to view a variety of on-device performance metrics.
 ```python
-
-
-
-
+sam_decoder_profile_job = hub.submit_profile_job(
+    model=sam_decoder_target_model,
+    device=device,
+)
+
+sam_encoder_profile_job = hub.submit_profile_job(
+    model=sam_encoder_target_model,
+    device=device,
+)

 ```
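Beyond the job URL, the numbers can also be pulled back into Python. A hedged sketch, assuming `ProfileJob.download_profile()` returns the profile as a dict; the key names below are assumptions about its layout:

```python
# Wait for the decoder profile job, then fetch the raw profile data.
sam_decoder_profile_job.wait()
profile = sam_decoder_profile_job.download_profile()

# "execution_summary" and its fields are assumed keys in the profile dict.
summary = profile.get("execution_summary", {})
print(summary.get("estimated_inference_time"))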
@@ -175,14 +201,20 @@ Step 3: **Verify on-device accuracy**
 To verify the accuracy of the model on-device, you can run on-device inference
 on sample input data on the same cloud hosted device.
 ```python
-
-
-
-
-
-)
-
-
+sam_decoder_input_data = sam_decoder_model.sample_inputs()
+sam_decoder_inference_job = hub.submit_inference_job(
+    model=sam_decoder_target_model,
+    device=device,
+    inputs=sam_decoder_input_data,
+)
+sam_decoder_inference_job.download_output_data()
+
+sam_encoder_input_data = sam_encoder_model.sample_inputs()
+sam_encoder_inference_job = hub.submit_inference_job(
+    model=sam_encoder_target_model,
+    device=device,
+    inputs=sam_encoder_input_data,
+)
+sam_encoder_inference_job.download_output_data()

 ```
With the output of the model, you can compute metrics like PSNR and relative errors, or spot-check the output against the expected output.
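For example, a minimal PSNR check for the decoder might look like the sketch below; the `psnr` helper, the choice of the first output tensor, the `peak` value, and the local PyTorch forward pass as reference are all illustrative assumptions:

```python
import numpy as np
import torch

def psnr(x: np.ndarray, y: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between two equally shaped arrays."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

# download_output_data() returns a dict-like mapping of output names to lists
# of arrays; taking the first output of the decoder job is an assumption.
decoder_outputs = sam_decoder_inference_job.download_output_data()
on_device = next(iter(decoder_outputs.values()))[0]

# Local PyTorch forward pass on the same sample inputs as the reference;
# assumes the first returned tensor corresponds to the first on-device output.
torch_inputs = [torch.tensor(data[0]) for _, data in sam_decoder_input_data.items()]
reference = sam_decoder_model(*torch_inputs)[0].detach().numpy()

print(f"Decoder PSNR vs. local PyTorch: {psnr(on_device, reference):.2f} dB")
```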