qaihm-bot committed commit a878ec2 (verified) · 1 parent: f180ae2

See https://github.com/quic/ai-hub-models/releases/v0.34.0 for changelog.

Files changed (1)
  1. README.md +175 −22

README.md
@@ -19,7 +19,11 @@ BGNet, or Boundary-Guided Network, is a model designed for camouflaged object detection
  This model is an implementation of BGNet found [here](https://github.com/thograce/bgnet).

- More details on model performance across various devices, can be found [here](https://aihub.qualcomm.com/models/bgnet).

  ### Model Details

@@ -65,6 +69,174 @@ This model is an implementation of BGNet found [here](https://github.com/thograc
  ## License
  * The original implementation of BGNet does not provide a license.
@@ -79,26 +251,7 @@ This model is an implementation of BGNet found [here](https://github.com/thograc
  ## Community
- * Join [our AI Hub Slack community](https://qualcomm-ai-hub.slack.com/join/shared_invite/zt-2d5zsmas3-Sj0Q9TzslueCjS31eXG2UA#/shared-invite/email) to collaborate, post questions and learn more about on-device AI.
  * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).

- ## Usage and Limitations
-
- Model may not be used for or in connection with any of the following applications:
-
- - Accessing essential private and public services and benefits;
- - Administration of justice and democratic processes;
- - Assessing or recognizing the emotional state of a person;
- - Biometric and biometrics-based systems, including categorization of persons based on sensitive characteristics;
- - Education and vocational training;
- - Employment and workers management;
- - Exploitation of the vulnerabilities of persons resulting in harmful behavior;
- - General purpose social scoring;
- - Law enforcement;
- - Management and operation of critical infrastructure;
- - Migration, asylum and border control management;
- - Predictive policing;
- - Real-time remote biometric identification in public spaces;
- - Recommender systems of social media platforms;
- - Scraping of facial images (from the internet or otherwise); and/or
- - Subliminal manipulation
 
  This model is an implementation of BGNet found [here](https://github.com/thograce/bgnet).

+ This repository provides scripts to run BGNet on Qualcomm® devices.
+ More details on model performance across various devices can be found
+ [here](https://aihub.qualcomm.com/models/bgnet).
+
+ **WARNING**: The model assets are not readily available for download due to licensing restrictions.

  ### Model Details

 
+ ## Installation
+
+ Install the package via pip:
+ ```bash
+ pip install "qai-hub-models[bgnet]"
+ ```
+
+
+ ## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device
+
+ Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
+ Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.
+
+ With this API token, you can configure your client to run models on
+ cloud-hosted devices.
+ ```bash
+ qai-hub configure --api_token API_TOKEN
+ ```
+ Navigate to [docs](https://app.aihub.qualcomm.com/docs/) for more information.
+
+
+ ## Demo off target
+
+ The package contains a simple end-to-end demo that downloads pre-trained
+ weights and runs this model on a sample input.
+
+ ```bash
+ python -m qai_hub_models.models.bgnet.demo
+ ```
+
+ The above demo runs a reference implementation of pre-processing, model
+ inference, and post-processing.
+
+ **NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
+ environment, please add the following to your cell (instead of the above).
+ ```
+ %run -m qai_hub_models.models.bgnet.demo
+ ```
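The exact transforms live inside the package's demo; as a rough sketch of what pre- and post-processing for a camouflaged object detection model like this typically involves (the normalization statistics, input layout, and threshold below are illustrative assumptions, not BGNet's exact pipeline):

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Illustrative pre-processing: normalize an HWC uint8 image with
    (assumed) ImageNet statistics and convert to an NCHW float32 batch."""
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = image.astype(np.float32) / 255.0
    x = (x - mean) / std
    return x.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> NCHW, add batch dim

def postprocess(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Illustrative post-processing: apply a sigmoid to the raw mask logits
    and threshold to a binary camouflage mask."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    return (probs > threshold).astype(np.uint8)
```

Resizing to the model's expected input resolution would normally happen before `preprocess`; it is omitted here to keep the sketch dependency-free.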
+
+
+ ### Run model on a cloud-hosted device
+
+ In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
+ device. This script does the following:
+ * Performance check on a cloud-hosted device
+ * Download of compiled assets that can be deployed on-device for Android.
+ * Accuracy check between PyTorch and on-device outputs.
+
+ ```bash
+ python -m qai_hub_models.models.bgnet.export
+ ```
+
+
+
+ ## How does this work?
+
+ This [export script](https://aihub.qualcomm.com/models/bgnet/qai_hub_models/models/BGNet/export.py)
+ leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
+ on-device. Let's go through each step below in detail:
+
+ Step 1: **Compile model for on-device deployment**
+
+ To compile a PyTorch model for on-device deployment, we first trace the model
+ in memory using `torch.jit.trace` and then call the `submit_compile_job` API.
+
+ ```python
+ import torch
+
+ import qai_hub as hub
+ from qai_hub_models.models.bgnet import Model
+
+ # Load the model
+ torch_model = Model.from_pretrained()
+
+ # Device
+ device = hub.Device("Samsung Galaxy S24")
+
+ # Trace model
+ sample_inputs = torch_model.sample_inputs()
+ pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])
+
+ # Compile model on a specific device
+ compile_job = hub.submit_compile_job(
+     model=pt_model,
+     device=device,
+     input_specs=torch_model.get_input_spec(),
+ )
+
+ # Get target model to run on-device
+ target_model = compile_job.get_target_model()
+ ```
+
+
+ Step 2: **Performance profiling on cloud-hosted device**
+
+ After compiling the model in step 1, it can be profiled on-device using the
+ `target_model`. Note that this script runs the model on a device automatically
+ provisioned in the cloud. Once the job is submitted, you can navigate to a
+ provided job URL to view a variety of on-device performance metrics.
+ ```python
+ profile_job = hub.submit_profile_job(
+     model=target_model,
+     device=device,
+ )
+ ```
+
+ Step 3: **Verify on-device accuracy**
+
+ To verify the accuracy of the model on-device, you can run on-device inference
+ on sample input data on the same cloud-hosted device.
+ ```python
+ input_data = torch_model.sample_inputs()
+ inference_job = hub.submit_inference_job(
+     model=target_model,
+     device=device,
+     inputs=input_data,
+ )
+ on_device_output = inference_job.download_output_data()
+ ```
+ With the output of the model, you can compute metrics like PSNR or relative
+ error, or spot-check the output against the expected output.
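The comparison itself needs only NumPy. As a hedged sketch of such helpers (the peak value and epsilon defaults below are illustrative choices, not part of the AI Hub API):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two same-shaped arrays."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical arrays
    return 10.0 * np.log10(peak**2 / mse)

def max_relative_error(reference: np.ndarray, test: np.ndarray, eps: float = 1e-8) -> float:
    """Largest element-wise relative error, guarded against division by zero."""
    return float(np.max(np.abs(reference - test) / (np.abs(reference) + eps)))
```

Applied to the PyTorch output and the corresponding array from `on_device_output` for the same input, these give a quick numeric sanity check before deployment.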
+
+ **Note**: This on-device profiling and inference requires access to Qualcomm®
+ AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).
+
+
+ ## Run demo on a cloud-hosted device
+
+ You can also run the demo on-device.
+
+ ```bash
+ python -m qai_hub_models.models.bgnet.demo --eval-mode on-device
+ ```
+
+ **NOTE**: If you are running in a Jupyter Notebook or Google Colab-like
+ environment, please add the following to your cell (instead of the above).
+ ```
+ %run -m qai_hub_models.models.bgnet.demo -- --eval-mode on-device
+ ```
+
+
+ ## Deploying compiled model to Android
+
+ The models can be deployed using multiple runtimes:
+ - TensorFlow Lite (`.tflite` export): [This
+ tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
+ guide to deploy the .tflite model in an Android application.
+
+ - QNN (`.so` export): This [sample
+ app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
+ provides instructions on how to use the `.so` shared library in an Android application.
+
+
+ ## View on Qualcomm® AI Hub
+ Get more details on BGNet's performance across various devices [here](https://aihub.qualcomm.com/models/bgnet).
+ Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).
+
+
  ## License
  * The original implementation of BGNet does not provide a license.

  ## Community
+ * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
  * For questions or feedback please [reach out to us](mailto:ai-hub-support@qti.qualcomm.com).