qaihm-bot committed on
Commit ef61051 · verified · 1 Parent(s): ee38cc6

See https://github.com/quic/ai-hub-models/releases/v0.46.1 for changelog.

FastSam-S_float.dlc DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:d625331326e88f56aaf1007d26c27b5836b84ef5274dc288414065549b7b953e
- size 47481076
FastSam-S_float.onnx.zip DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:cce2f263b53f5ffd0869e4476a6db8a9de6011e33bd6ee2f4a7c6688ef8ecd74
- size 39846309
FastSam-S_float.tflite DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:9b938aa01829678e4d357b4cd4119f089d28bcc48474e483bb2e6912681f634e
- size 47286456
README.md CHANGED
@@ -9,246 +9,92 @@ pipeline_tag: image-segmentation
 
 ![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_s/web-assets/model_demo.png)
 
- # FastSam-S: Optimized for Mobile Deployment
- ## Generate high quality segmentation mask on device
-
 
 The Fast Segment Anything Model (FastSAM) is a novel, real-time CNN-based solution for the Segment Anything task. This task is designed to segment any object within an image based on various possible user interaction prompts. The model performs competitively despite significantly reduced computation, making it a practical choice for a variety of vision tasks.
 
- This model is an implementation of FastSam-S found [here](https://github.com/CASIA-IVA-Lab/FastSAM).
-
- This repository provides scripts to run FastSam-S on Qualcomm® devices.
- More details on model performance across various devices can be found
- [here](https://aihub.qualcomm.com/models/fastsam_s).
-
- ### Model Details
-
- - **Model Type:** Model_use_case.semantic_segmentation
- - **Model Stats:**
- - Model checkpoint: fastsam-s.pt
- - Inference latency: RealTime
- - Input resolution: 640x640
- - Number of parameters: 11.8M
- - Model size (float): 45.1 MB
-
- | Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
- |---|---|---|---|---|---|---|---|---|
- | FastSam-S | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 37.745 ms | 4 - 243 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) |
- | FastSam-S | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 37.787 ms | 5 - 232 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 16.036 ms | 4 - 215 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) |
- | FastSam-S | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 16.91 ms | 5 - 199 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 6.889 ms | 4 - 7 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) |
- | FastSam-S | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 6.889 ms | 5 - 7 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 8.287 ms | 0 - 26 MB | NPU | [FastSam-S.onnx.zip](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.onnx.zip) |
- | FastSam-S | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 10.782 ms | 4 - 232 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) |
- | FastSam-S | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 10.761 ms | 1 - 233 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 37.745 ms | 4 - 243 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) |
- | FastSam-S | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 37.787 ms | 5 - 232 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 12.913 ms | 4 - 179 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) |
- | FastSam-S | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 12.842 ms | 0 - 162 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 10.782 ms | 4 - 232 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) |
- | FastSam-S | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 10.761 ms | 1 - 233 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 5.165 ms | 4 - 402 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) |
- | FastSam-S | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 5.183 ms | 5 - 382 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 6.031 ms | 16 - 219 MB | NPU | [FastSam-S.onnx.zip](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.onnx.zip) |
- | FastSam-S | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | TFLITE | 3.803 ms | 0 - 210 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) |
- | FastSam-S | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | QNN_DLC | 3.846 ms | 5 - 209 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX | 4.81 ms | 12 - 184 MB | NPU | [FastSam-S.onnx.zip](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.onnx.zip) |
- | FastSam-S | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | TFLITE | 2.924 ms | 0 - 221 MB | NPU | [FastSam-S.tflite](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.tflite) |
- | FastSam-S | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | QNN_DLC | 2.97 ms | 5 - 203 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | ONNX | 3.6 ms | 2 - 155 MB | NPU | [FastSam-S.onnx.zip](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.onnx.zip) |
- | FastSam-S | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 7.408 ms | 5 - 5 MB | NPU | [FastSam-S.dlc](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.dlc) |
- | FastSam-S | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 8.534 ms | 19 - 19 MB | NPU | [FastSam-S.onnx.zip](https://huggingface.co/qualcomm/FastSam-S/blob/main/FastSam-S.onnx.zip) |
-
-
- ## Installation
-
- Install the package via pip:
- ```bash
- # NOTE: 3.10 <= PYTHON_VERSION < 3.14 is supported.
- pip install "qai-hub-models[fastsam-s]"
- ```
-
- ## Configure Qualcomm® AI Hub Workbench to run this model on a cloud-hosted device
-
- Sign in to [Qualcomm® AI Hub Workbench](https://workbench.aihub.qualcomm.com/) with your
- Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
-
- With this API token, you can configure your client to run models on
- cloud-hosted devices.
- ```bash
- qai-hub configure --api_token API_TOKEN
- ```
- Navigate to the [docs](https://workbench.aihub.qualcomm.com/docs/) for more information.
-
- ## Demo off target
-
- The package contains a simple end-to-end demo that downloads pre-trained
- weights and runs this model on a sample input.
-
- ```bash
- python -m qai_hub_models.models.fastsam_s.demo
- ```
-
- The above demo runs a reference implementation of pre-processing, model
- inference, and post-processing.
-
- **NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
- environment, add the following to your cell (instead of the above).
- ```
- %run -m qai_hub_models.models.fastsam_s.demo
- ```
-
- ### Run model on a cloud-hosted device
-
- In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
- device. This script does the following:
- * Checks performance on a cloud-hosted device
- * Downloads compiled assets that can be deployed on-device for Android
- * Checks accuracy between PyTorch and on-device outputs
-
- ```bash
- python -m qai_hub_models.models.fastsam_s.export
- ```
-
- ## How does this work?
-
- This [export script](https://aihub.qualcomm.com/models/fastsam_s/qai_hub_models/models/FastSam-S/export.py)
- leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
- on-device. Let's go through each step below in detail:
-
- Step 1: **Compile model for on-device deployment**
-
- To compile a PyTorch model for on-device deployment, we first trace the model
- in memory using `jit.trace` and then call the `submit_compile_job` API.
-
- ```python
- import torch
-
- import qai_hub as hub
- from qai_hub_models.models.fastsam_s import Model
-
- # Load the model
- torch_model = Model.from_pretrained()
-
- # Device
- device = hub.Device("Samsung Galaxy S25")
-
- # Trace model
- input_shape = torch_model.get_input_spec()
- sample_inputs = torch_model.sample_inputs()
-
- pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
-
- # Compile model on a specific device
- compile_job = hub.submit_compile_job(
-     model=pt_model,
-     device=device,
-     input_specs=torch_model.get_input_spec(),
- )
-
- # Get target model to run on-device
- target_model = compile_job.get_target_model()
- ```
-
- Step 2: **Performance profiling on a cloud-hosted device**
-
- After compiling the model in Step 1, it can be profiled on-device using the
- `target_model`. Note that this script runs the model on a device automatically
- provisioned in the cloud. Once the job is submitted, you can navigate to a
- provided job URL to view a variety of on-device performance metrics.
- ```python
- profile_job = hub.submit_profile_job(
-     model=target_model,
-     device=device,
- )
- ```
-
- Step 3: **Verify on-device accuracy**
-
- To verify the accuracy of the model on-device, you can run on-device inference
- on sample input data on the same cloud-hosted device.
- ```python
- input_data = torch_model.sample_inputs()
- inference_job = hub.submit_inference_job(
-     model=target_model,
-     device=device,
-     inputs=input_data,
- )
- on_device_output = inference_job.download_output_data()
- ```
- With the output of the model, you can compute metrics like PSNR and relative
- error, or spot-check the output against the expected output.
-
- **Note**: This on-device profiling and inference requires access to Qualcomm®
- AI Hub Workbench. [Sign up for access](https://myaccount.qualcomm.com/signup).
-
- ## Run demo on a cloud-hosted device
-
- You can also run the demo on-device.
-
- ```bash
- python -m qai_hub_models.models.fastsam_s.demo --eval-mode on-device
- ```
-
- **NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
- environment, add the following to your cell (instead of the above).
- ```
- %run -m qai_hub_models.models.fastsam_s.demo -- --eval-mode on-device
- ```
-
- ## Deploying compiled model to Android
-
- The models can be deployed using multiple runtimes:
- - TensorFlow Lite (`.tflite` export): [This
- tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
- guide to deploy the .tflite model in an Android application.
-
- - QNN (`.so` export): This [sample
- app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
- provides instructions on how to use the `.so` shared library in an Android application.
-
- ## View on Qualcomm® AI Hub
- Get more details on FastSam-S's performance across various devices [here](https://aihub.qualcomm.com/models/fastsam_s).
- Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
-
 
 ## License
 * The license for the original implementation of FastSam-S can be found
 [here](https://github.com/CASIA-IVA-Lab/FastSAM/blob/main/LICENSE).
 
-
 ## References
 * [Fast Segment Anything](https://arxiv.org/abs/2306.12156)
 * [Source Model Implementation](https://github.com/CASIA-IVA-Lab/FastSAM)
 
-
 ## Community
 * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
 * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).
-
 
+ # FastSam-S: Optimized for Qualcomm Devices
+
+ This is based on the implementation of FastSam-S found [here](https://github.com/CASIA-IVA-Lab/FastSAM).
+ This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/fastsam_s) library to export with custom configurations. More details on model performance across various devices can be found [here](#performance-summary).
+
+ Qualcomm AI Hub Models uses [Qualcomm AI Hub Workbench](https://workbench.aihub.qualcomm.com) to compile, profile, and evaluate this model. [Sign up](https://myaccount.qualcomm.com/signup) to run these models on a hosted Qualcomm® device.
+
+ ## Getting Started
+ There are two ways to deploy this model on your device:
+
+ ### Option 1: Download Pre-Exported Models
+
+ Below are pre-exported model assets ready for deployment.
+
+ | Runtime | Precision | Chipset | SDK Versions | Download |
+ |---|---|---|---|---|
+ | ONNX | float | Universal | QAIRT 2.37, ONNX Runtime 1.23.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_s/releases/v0.46.1/fastsam_s-onnx-float.zip) |
+ | QNN_DLC | float | Universal | QAIRT 2.42 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_s/releases/v0.46.1/fastsam_s-qnn_dlc-float.zip) |
+ | TFLITE | float | Universal | QAIRT 2.42, TFLite 2.17.0 | [Download](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/fastsam_s/releases/v0.46.1/fastsam_s-tflite-float.zip) |
+
+ For more device-specific assets and performance metrics, visit **[FastSam-S on Qualcomm® AI Hub](https://aihub.qualcomm.com/models/fastsam_s)**.
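The download links above follow a single naming pattern per runtime. As a small sketch (the `asset_url` helper below is hypothetical; only the URL pattern is taken from the table), you can construct the asset URL for a given runtime and precision:

```python
# Hypothetical helper: builds a pre-exported asset URL for this release,
# following the naming pattern visible in the download table above.
BASE = (
    "https://qaihub-public-assets.s3.us-west-2.amazonaws.com"
    "/qai-hub-models/models/fastsam_s/releases/v0.46.1"
)

def asset_url(runtime: str, precision: str = "float") -> str:
    # runtime is one of "onnx", "qnn_dlc", "tflite" (lowercase, as in the URLs)
    return f"{BASE}/fastsam_s-{runtime}-{precision}.zip"

print(asset_url("tflite"))
```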
+
+ ### Option 2: Export with Custom Configurations
+
+ Use the [Qualcomm® AI Hub Models](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/fastsam_s) Python library to compile and export the model with your own:
+ - Custom weights (e.g., fine-tuned checkpoints)
+ - Custom input shapes
+ - Target device and runtime configurations
+
+ This option is ideal if you need to customize the model beyond the default configuration provided here.
+
+ See the [FastSam-S page on GitHub](https://github.com/quic/ai-hub-models/blob/main/qai_hub_models/models/fastsam_s) for usage instructions.
+
+ ## Model Details
+
+ **Model Type:** Model_use_case.semantic_segmentation
+
+ **Model Stats:**
+ - Model checkpoint: fastsam-s.pt
+ - Inference latency: RealTime
+ - Input resolution: 640x640
+ - Number of parameters: 11.8M
+ - Model size (float): 45.1 MB
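Given the 640x640 input resolution above, a minimal preprocessing sketch follows. The NCHW layout and [0, 1] scaling here are assumptions for illustration; check the exported model's input spec before relying on them.

```python
import numpy as np

def preprocess(img: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 image (already resized to 640x640) into a
    1x3x640x640 float32 tensor. ASSUMPTIONS: NCHW layout and [0, 1] scaling;
    verify against the exported model's input spec."""
    x = img.astype(np.float32) / 255.0   # scale to [0, 1]
    x = np.transpose(x, (2, 0, 1))       # HWC -> CHW
    return x[np.newaxis, ...]            # add batch dim -> NCHW

batch = preprocess(np.zeros((640, 640, 3), dtype=np.uint8))
print(batch.shape)  # (1, 3, 640, 640)
```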
+
+ ## Performance Summary
+ | Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
+ |---|---|---|---|---|---|---|
+ | FastSam-S | ONNX | float | Snapdragon® X Elite | 8.598 ms | 20 - 20 MB | NPU |
+ | FastSam-S | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 6.1 ms | 22 - 224 MB | NPU |
+ | FastSam-S | ONNX | float | Qualcomm® QCS8550 (Proxy) | 8.371 ms | 0 - 38 MB | NPU |
+ | FastSam-S | ONNX | float | Qualcomm® QCS9075 | 13.529 ms | 12 - 15 MB | NPU |
+ | FastSam-S | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.789 ms | 12 - 177 MB | NPU |
+ | FastSam-S | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.561 ms | 2 - 150 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Snapdragon® X Elite | 8.011 ms | 5 - 5 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 5.621 ms | 0 - 255 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 38.556 ms | 1 - 200 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 7.446 ms | 5 - 10 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Qualcomm® SA8775P | 11.282 ms | 1 - 207 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Qualcomm® QCS9075 | 10.872 ms | 7 - 17 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 15.055 ms | 5 - 212 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Qualcomm® SA7255P | 38.556 ms | 1 - 200 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Qualcomm® SA8295P | 13.778 ms | 0 - 176 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 4.334 ms | 0 - 202 MB | NPU |
+ | FastSam-S | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 3.137 ms | 5 - 201 MB | NPU |
+ | FastSam-S | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 5.159 ms | 3 - 174 MB | NPU |
+ | FastSam-S | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 37.71 ms | 4 - 115 MB | NPU |
+ | FastSam-S | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 6.862 ms | 4 - 33 MB | NPU |
+ | FastSam-S | TFLITE | float | Qualcomm® SA8775P | 10.644 ms | 4 - 120 MB | NPU |
+ | FastSam-S | TFLITE | float | Qualcomm® QCS9075 | 10.565 ms | 4 - 39 MB | NPU |
+ | FastSam-S | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 14.079 ms | 4 - 231 MB | NPU |
+ | FastSam-S | TFLITE | float | Qualcomm® SA7255P | 37.71 ms | 4 - 115 MB | NPU |
+ | FastSam-S | TFLITE | float | Qualcomm® SA8295P | 12.955 ms | 4 - 195 MB | NPU |
+ | FastSam-S | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 3.893 ms | 0 - 111 MB | NPU |
+ | FastSam-S | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 2.881 ms | 0 - 208 MB | NPU |
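As a quick way to read the table, the sketch below compares the three runtimes on a single chipset; the latencies are transcribed from the Snapdragon® 8 Elite Gen 5 Mobile rows above (relative rankings can differ on other chipsets).

```python
# Latencies (ms) on Snapdragon® 8 Elite Gen 5 Mobile, transcribed from the
# performance table above.
latency_ms = {
    "ONNX": 3.561,
    "QNN_DLC": 3.137,
    "TFLITE": 2.881,
}

# Pick the runtime with the lowest reported inference time.
fastest = min(latency_ms, key=latency_ms.get)
print(fastest, latency_ms[fastest])  # TFLITE 2.881
```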
 
tool-versions.yaml DELETED
@@ -1,4 +0,0 @@
- tool_versions:
-   onnx:
-     qairt: 2.37.1.250807093845_124904
-     onnx_runtime: 1.23.0