v0.42.0

See https://github.com/quic/ai-hub-models/releases/v0.42.0 for the changelog.
Files changed:
- DeformableDETR_float.onnx.zip (+2 -2)
- README.md (+8 -8)
DeformableDETR_float.onnx.zip CHANGED

@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:01f8718baa8600370bd0696e2fd5f1fd5ea7307b397984791bdf6d700a06e1fc
+size 152084144
README.md CHANGED

@@ -36,10 +36,10 @@ More details on model performance across various devices, can be found
 
 | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model
 |---|---|---|---|---|---|---|---|---|
-| DeformableDETR | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX |
-| DeformableDETR | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX |
-| DeformableDETR | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen5 Mobile | ONNX |
-| DeformableDETR | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX |
+| DeformableDETR | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 1200.326 ms | 81 - 309 MB | NPU | [DeformableDETR.onnx.zip](https://huggingface.co/qualcomm/DeformableDETR/blob/main/DeformableDETR.onnx.zip) |
+| DeformableDETR | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX | 941.575 ms | 78 - 332 MB | NPU | [DeformableDETR.onnx.zip](https://huggingface.co/qualcomm/DeformableDETR/blob/main/DeformableDETR.onnx.zip) |
+| DeformableDETR | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen5 Mobile | ONNX | 825.546 ms | 167 - 416 MB | NPU | [DeformableDETR.onnx.zip](https://huggingface.co/qualcomm/DeformableDETR/blob/main/DeformableDETR.onnx.zip) |
+| DeformableDETR | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 1478.475 ms | 132 - 132 MB | NPU | [DeformableDETR.onnx.zip](https://huggingface.co/qualcomm/DeformableDETR/blob/main/DeformableDETR.onnx.zip) |
 
 
 

@@ -54,9 +54,9 @@ pip install "qai-hub-models[deformable-detr]"
 ```
 
 
-## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
+## Configure Qualcomm® AI Hub Workbench to run this model on a cloud-hosted device
 
-Sign-in to [Qualcomm® AI Hub](https://
+Sign-in to [Qualcomm® AI Hub Workbench](https://workbench.aihub.qualcomm.com/) with your
 Qualcomm® ID. Once signed in navigate to `Account -> Settings -> API Token`.
 
 With this API token, you can configure your client to run models on the cloud

@@ -64,7 +64,7 @@ hosted devices.
 ```bash
 qai-hub configure --api_token API_TOKEN
 ```
-Navigate to [docs](https://
+Navigate to [docs](https://workbench.aihub.qualcomm.com/docs/) for more information.
 
 
 

@@ -175,7 +175,7 @@ With the output of the model, you can compute like PSNR, relative errors or
 spot check the output with expected output.
 
 **Note**: This on-device profiling and inference requires access to Qualcomm®
-AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
+AI Hub Workbench. [Sign up for access](https://myaccount.qualcomm.com/signup).
 
 
 
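Taken together, the setup steps in the updated README amount to two commands. This is a sketch using only the commands shown in the diff; `API_TOKEN` is a placeholder for the token you obtain from `Account -> Settings -> API Token` after signing in to AI Hub Workbench:

```shell
# Install the qai-hub-models package with the DeformableDETR extra
# (extra name taken from the README's install snippet).
pip install "qai-hub-models[deformable-detr]"

# Configure the qai-hub client with your personal API token so that
# profiling and inference jobs can run on cloud-hosted devices.
qai-hub configure --api_token API_TOKEN
```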