qaihm-bot committed on
Commit 228987c · verified · 1 Parent(s): 9ba7759

See https://github.com/qualcomm/ai-hub-models/releases/v0.48.0 for changelog.

Files changed (1):
1. README.md +6 -6
README.md CHANGED
```diff
@@ -15,7 +15,7 @@ pipeline_tag: object-detection
 Deformable DETR is a machine learning model that can detect objects (trained on COCO dataset).
 
 This is based on the implementation of DeformableDETR found [here](https://github.com/fundamentalvision/Deformable-DETR).
-This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/deformable_detr) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).
+This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/blob/main/qai_hub_models/models/deformable_detr) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).
 
 Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.
 
@@ -28,22 +28,22 @@ Below are pre-exported model assets ready for deployment.
 
 | Runtime | Precision | Chipset | SDK Versions | Download |
 |---|---|---|---|---|
-| ONNX | float | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/deformable_detr/releases/v0.47.0/deformable_detr-onnx-float.zip)
-| ONNX | w8a16 | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/deformable_detr/releases/v0.47.0/deformable_detr-onnx-w8a16.zip)
+| ONNX | float | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/deformable_detr/releases/v0.48.0/deformable_detr-onnx-float.zip)
+| ONNX | w8a16 | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/deformable_detr/releases/v0.48.0/deformable_detr-onnx-w8a16.zip)
 
 For more device-specific assets and performance metrics, visit **[DeformableDETR on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/deformable_detr)**.
 
 
 ### Option 2: Export with Custom Configurations
 
-Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/deformable_detr) Python library to compile and export the model with your own:
+Use the [Qualcomm® AI Hub Models](https://github.com/qualcomm/ai-hub-models/blob/main/qai_hub_models/models/deformable_detr) Python library to compile and export the model with your own:
 - Custom weights (e.g., fine-tuned checkpoints)
 - Custom input shapes
 - Target device and runtime configurations
 
 This option is ideal if you need to customize the model beyond the default configuration provided here.
 
-See the [DeformableDETR on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/deformable_detr) repository for usage instructions.
+See the [DeformableDETR on GitHub](https://github.com/qualcomm/ai-hub-models/blob/main/qai_hub_models/models/deformable_detr) repository for usage instructions.
 
 ## Model Details
 
@@ -58,6 +58,7 @@ See our repository for [DeformableDETR on GitHub](https://github.com/quic/ai-hub
 ## Performance Summary
 | Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit
 |---|---|---|---|---|---|---
+| DeformableDETR | ONNX | w8a16 | Snapdragon® X2 Elite | 1632.94 ms | 93 - 93 MB | NPU
 | DeformableDETR | ONNX | w8a16 | Snapdragon® X Elite | 2795.423 ms | 90 - 90 MB | NPU
 | DeformableDETR | ONNX | w8a16 | Snapdragon® 8 Gen 3 Mobile | 2355.274 ms | 67 - 2032 MB | NPU
 | DeformableDETR | ONNX | w8a16 | Qualcomm® QCS6490 | 7549.381 ms | 1052 - 1059 MB | CPU
@@ -66,7 +67,6 @@ See our repository for [DeformableDETR on GitHub](https://github.com/quic/ai-hub
 | DeformableDETR | ONNX | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 1684.91 ms | 63 - 1330 MB | NPU
 | DeformableDETR | ONNX | w8a16 | Snapdragon® 7 Gen 4 Mobile | 3776.96 ms | 1044 - 1067 MB | CPU
 | DeformableDETR | ONNX | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 1412.598 ms | 65 - 1358 MB | NPU
-| DeformableDETR | ONNX | w8a16 | Snapdragon® X2 Elite | 1632.94 ms | 93 - 93 MB | NPU
 
 ## License
 * The license for the original implementation of DeformableDETR can be found
```
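For readers who download one of the ONNX assets listed in this diff, the sketch below illustrates, under stated assumptions, how Deformable DETR's raw outputs are typically post-processed: the model predicts per-query class logits (scored with a per-class sigmoid) and boxes in normalized (cx, cy, w, h) form. The file name `deformable_detr.onnx`, the 0.5 score threshold, and the exact output layout are assumptions for illustration, not taken from the packaged asset; consult the export's own metadata for the actual tensor names and shapes.

```python
# Hedged post-processing sketch for Deformable DETR-style ONNX outputs.
# Assumption: per-query class logits of shape (num_queries, num_classes)
# and boxes of shape (num_queries, 4) in normalized (cx, cy, w, h) format.
import os

import numpy as np


def sigmoid(x: np.ndarray) -> np.ndarray:
    """Per-class sigmoid, as used by Deformable DETR's focal-loss head."""
    return 1.0 / (1.0 + np.exp(-x))


def boxes_cxcywh_to_xyxy(boxes: np.ndarray) -> np.ndarray:
    """Convert normalized (cx, cy, w, h) boxes to (x1, y1, x2, y2)."""
    cx, cy, w, h = boxes[..., 0], boxes[..., 1], boxes[..., 2], boxes[..., 3]
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=-1)


def filter_detections(logits, boxes, score_threshold=0.5):
    """Keep queries whose best class score clears the threshold."""
    scores = sigmoid(logits)          # (num_queries, num_classes)
    best = scores.max(axis=-1)        # best score per query
    labels = scores.argmax(axis=-1)   # predicted class per query
    keep = best >= score_threshold
    return best[keep], labels[keep], boxes_cxcywh_to_xyxy(boxes[keep])


# Hypothetical file name from the unzipped asset -- adjust to the real path.
MODEL_PATH = "deformable_detr.onnx"
if os.path.exists(MODEL_PATH):
    # Only runs when the downloaded model is actually present.
    import onnxruntime as ort

    session = ort.InferenceSession(MODEL_PATH)
    # Feed a preprocessed image, then post-process with the helpers above.
```

The helpers are pure NumPy, so they can be checked independently of any runtime; the guarded block at the end only executes when the unzipped model file is actually present.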