dgarrett-synaptics committed cde70f6 (verified; parent 315f47f): Update README.md
Files changed (1): README.md (+2, -399)
---
title: SRSDK Model Compiler
emoji: 🏃
colorFrom: purple
colorTo: pink
hf_oauth_scopes:
- read-repos
- write-repos
- manage-repos
short_description: Helps build TFLite models to integrate into the SRSDK
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
* Introduction

The SyNet repository is a library for developing networks for Synaptics vision chips. It consists of PyTorch model components that can be exported to TensorFlow and tflite without an intermediate export backend such as ONNX. The resulting tflite files are clean and respect chip memory constraints. The aim of SyNet is to streamline the process of generating trained models to deploy on Synaptics chips, for both internal and external use.

In addition to model definitions, analysis, data manipulation, and data analysis tools, SyNet also aims to support several "backends". For now, the only supported backend is Ultralytics. Ultralytics provides a suite of tools for training vision models and visualizing those models. Using Ultralytics as a backend, you can generate trained vision models optimized for our chips.

This code is GPL-licensed, and the Ultralytics backend is AGPL-licensed, but the output of this code consists of tflite files. These output tflite files are data not covered by the copyright on either code base; consequently, the respective licenses of the code do not apply to the output tflite files.
* Performance

The following table summarizes model performance on the person class of the COCO dataset for four major computer vision tasks. All of these models run at VGA (640x480) resolution at about 10 fps on the Sabre A0 chip.

| Task                  | Score | Metric         | Data                                                 |
|-----------------------+-------+----------------+------------------------------------------------------|
| Classification        | 0.945 | Top-1 accuracy | Person Visual Wake Words with standard minival split |
| Object Detection      | 0.730 | Box AP50       | COCO detection subset used by Ultralytics            |
| Pose Estimation       | 0.729 | Pose AP50      | COCO keypoint subset used by Ultralytics             |
| Instance Segmentation | 0.631 | Mask AP50      | COCO segmentation subset used by Ultralytics         |
* Roadmap

** Current Features

- Models optimized for Sabre
- Memory- and compute-efficient model components
- Usable with PyTorch and TensorFlow training libraries
- Usable from the command line or a Python environment
- In-model demosaicing export option
  - faster than the demosaic hardware block
  - increases available weight memory by >2x
- Includes slim tflite runtime utilities (for demos)
- Supports pluggable training backends
  - Main backend is Ultralytics
- Currently supports all core vision tasks from Ultralytics
  - Object Detection
  - Pose Estimation
  - Instance Segmentation
  - Classification
- Supports evaluating tflites through Ultralytics
- Allows for easy setup of laptop demos
  - quickly run and view a model running on a webcam
- Includes more advanced custom tflite evaluation (not through Ultralytics)
  - Computes combined and per-dataset statistics of model performance
** Planned for later releases

- Models optimized for VS680
- Automatic model selection from the zoo
  - Select training resolution, inference resolution, and heads
- Dataset manipulation tools
  - subsample and combine classes for embedded applications
  - camera augmentations
- Enable arbitrary addition of box attributes to regress
  - age, orientation (pitch, yaw, roll)
- Complete miscellaneous tasks (see the corresponding GitLab milestone)
- Mixed precision support

** Future Research

- Hugging Face backend integration
* Installation

For more complex setups, you should create a virtual environment: https://docs.python.org/3/library/venv.html. In the following examples, we include the Ultralytics backend by adding '[ultra]'. To install via pip:

#+begin_src shell
pip install "synet[ultra] @ git+ssh://git@gitlab.synaptics.com/wssd-ai-algorithms/synet-fork.git"
#+end_src

or, if you have cloned the repository locally:

#+begin_src shell
pip install [-e] "/PATH/TO/LOCAL/SYNET[ultra]"
#+end_src

where '-e' allows you to edit your local clone after installation. If the install fails in any way, please report it to us. In that case, you can install the exact library versions used to produce the results in the Performance section above with the following (requires CUDA 11.8):

#+begin_src shell
pip install -r /PATH/TO/LOCAL/SYNET/requirements-11.8.txt "/PATH/TO/LOCAL/SYNET[ultra]"
#+end_src
* Quickstart

The following is a simple example of how to train a model on COCO, quantize it to tflite, and benchmark that tflite on a custom dataset specified by a user's CUSTOM_DATA.yaml. First, train the model:

#+begin_src shell
synet ultralytics train model=sabre-detect-vga.yaml data=coco.yaml
#+end_src

Quantize the trained model:

#+begin_src shell
synet quantize --backend ultralytics --weights runs/train/detect/weights/best.pt --data /path/to/coco.yaml
#+end_src

Evaluate the trained and quantized model:

#+begin_src shell
synet ultralytics val model=runs/train/detect/weights/best.tflite task=detect data=coco.yaml
#+end_src

If you have a custom evaluation dataset, you can evaluate on that (e.g. the test split) as well:

#+begin_src shell
synet ultralytics val model=runs/train/detect/weights/best.tflite split=test task=detect save_txt=True save_conf=True data=CUSTOM_DATA.yaml
#+end_src

Finally, generate metrics for the model performance, especially at the .95 precision operating point:

#+begin_src shell
synet metrics CUSTOM_DATA.yaml --out-dirs runs/detect/val --project runs/detect/val --precision .95
#+end_src
* Core Shell API

The basic syntax for running SyNet from a shell is:

#+begin_src shell
synet [entrypoint] [entrypoint specific args]
#+end_src

where entrypoint can be a native SyNet module or a backend like ultralytics. For instance:

#+begin_src shell
synet ultralytics train ...
synet quantize --backend ultralytics ...
#+end_src

Note that while some backends are callable this way, the backend may also need to be specified for other modules. For instance, synet.quantize needs to know which backend to load the model with.

For information on training and visualizing models, see the section on backends below.
** Quantize

The SyNet repository includes the ability to quantize models:

#+begin_src shell
synet quantize --backend BACKEND --weights MODEL_PT_SAVE --data REP_DATA
#+end_src

For instance, running:

#+begin_src shell
synet quantize --backend ultralytics --weights ./exp/weights/best.pt --data /PATH/TO/CUSTOM_DATASET.YAML --image-shape 480 640
#+end_src

will create a tflite at ./exp/weights/best.tflite with input shape [480, 640]. The image shape defaults to whatever the model is designed to take, but can be overridden in this way. You may also specify a model yaml like so:

#+begin_src shell
synet quantize --backend ultralytics --cfg sabre-detect-qvga.yaml
#+end_src

This will place a quantized model at ./sabre-detect-qvga.tflite. This lets you inspect the architecture, though it will not be a trained model, so the model output will be useless. For more information, see:

#+begin_src shell
synet quantize --help
#+end_src
** Metrics

SyNet's metrics code is an advanced model benchmarking tool that allows the user to score object detection on multiple datasets simultaneously. The benefit of using multiple datasets is that the tool can find a confidence threshold by applying a precision threshold to the combined data. This global operating point is then applied to each dataset individually. Plots are generated showing the mAP curves for each class, each dataset, the combined dataset, and combined classes. Additionally, on each curve, the global precision point, the dataset precision point, and the .5 confidence point are plotted. The exact coordinates and confidences of each point are printed. The basic usage is:

#+begin_src shell
synet metrics DATA1.YAML DATA2.YAML... --out-dirs OUT_DIR1 OUT_DIR2... --project PLOT_DIR --precisions PRECISION...
#+end_src
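The global operating-point selection described above can be sketched as follows. This is a simplified illustration, not SyNet's actual implementation: detections are assumed to be (confidence, is_true_positive) pairs, and we pick the lowest confidence threshold whose precision on the combined data still meets the target.

```python
def precision_at(dets, thresh):
    """Precision of all detections with confidence >= thresh."""
    kept = [tp for conf, tp in dets if conf >= thresh]
    return sum(kept) / len(kept) if kept else 1.0

def global_operating_point(per_dataset_dets, target_precision):
    """Find the lowest confidence threshold whose precision on the
    combined datasets meets the target, then report each dataset's
    precision at that shared (global) threshold."""
    combined = sorted(
        (d for dets in per_dataset_dets.values() for d in dets), reverse=True
    )
    # Sweep candidate thresholds from high to low confidence; keep the
    # lowest one that still satisfies the precision target.
    best = 1.0
    for conf, _ in combined:
        if precision_at(combined, conf) >= target_precision:
            best = conf
    per_dataset = {
        name: precision_at(dets, best)
        for name, dets in per_dataset_dets.items()
    }
    return best, per_dataset
```

Applying the same global threshold to every dataset is what makes the per-dataset numbers comparable: each dataset is scored at one shared operating point rather than at its own best threshold.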
There must be one data yaml for each dataset, and they are expected to be in Ultralytics format: https://docs.ultralytics.com/datasets/?h=data#steps-to-contribute-a-new-dataset

If present, the 'test' data split is used; otherwise, the 'val' split is used for each dataset. The metrics code does not actually run the model; instead, it consumes the output of running the model via other code, hence "OUT_DIR" is the output directory of that other code. This may change in the future, but currently you should populate the out dir with the only supported backend:

#+begin_src shell
synet ultralytics val model=/PATH/TO/BEST.TFLITE split=test imgsz=HEIGHT,WIDTH data=DATA1.YAML task=detect save_txt=True save_conf=True
#+end_src

See the notes on validation in the Ultralytics backend section below. For more information on the metrics code, see:

#+begin_src shell
synet metrics --help
#+end_src
* Core Python API

** Base Layers

*** Converting to Keras/TensorFlow

SyNet exists to be the glue between state-of-the-art training and our chips. Each model component knows how to "export itself" to a Keras/TensorFlow model. This is done approximately like so:

#+begin_src python
from keras import Input, Model
from synet.base import askeras
model = ...
inp = Input(...)
with askeras:
    kmodel = Model(inp, model(inp))
#+end_src

This method works as long as only SyNet blocks operate directly on the input. For a more complex example, see quantize.py.
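The askeras usage above suggests a context-manager pattern in which a flag switches each block's call between its PyTorch behavior and its Keras-export behavior. The following is a minimal sketch of that pattern with hypothetical names (ExportMode, ConvBlock); it is not SyNet's actual implementation, and the string returns stand in for real tensor operations.

```python
class ExportMode:
    """Context manager that flips a shared flag; blocks consult the
    flag to decide which framework's ops to emit."""
    active = False

    def __enter__(self):
        ExportMode.active = True
        return self

    def __exit__(self, *exc):
        ExportMode.active = False

askeras = ExportMode()

class ConvBlock:
    """Toy dual-mode block: returns a tagged description instead of a
    real tensor, just to show the dispatch."""
    def __call__(self, x):
        if ExportMode.active:
            return f"keras_conv({x})"   # would build Keras layers here
        return f"torch_conv({x})"       # would run PyTorch ops here

block = ConvBlock()
normal = block("input")        # outside the context: torch path
with askeras:
    exported = block("input")  # inside the context: keras path
```

Because the flag is checked at call time, the same model object can serve both training and export without any rebuilding, which matches how the with askeras block wraps an ordinary model call.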
* Backends

For now, the only supported backend is Ultralytics.

** Ultralytics

Any Ultralytics function (train, predict, val, etc.) will run through SyNet with SyNet modules. The basic shell syntax is:

#+begin_src shell
synet ultralytics [ultralytics ARGS]...
#+end_src

This performs three SyNet-specific operations, then passes execution to the normal Ultralytics code entrypoint:
- Copy the model config from the SyNet zoo (synet/zoo/ultralytics) if necessary.
- Set the imgsz (image size) Ultralytics parameter according to the model specification.
- Apply patches to the Ultralytics modules where necessary to enable proper SyNet model loading within Ultralytics.

If you need to use this backend through Python (instead of a shell), the only necessary step is to apply the patches, as in the following snippet:

#+begin_src python
from synet.backends import get_backend
get_backend('ultralytics').patch()
#+end_src

After this point, you are free to use SyNet models and tflites through the normal Ultralytics API, but do not try to use Ultralytics' "export" functionality to deploy to Sabre; use SyNet's quantize instead. Models exported through Ultralytics will not be properly optimized and are not expected to run on our chips.

We give some examples and explanations of basic Ultralytics usage here, but for any further questions about Ultralytics, consult the Ultralytics GitHub page and documentation:
- [[https://github.com/ultralytics/ultralytics]]
- https://docs.ultralytics.com/
*** Train

The SyNet repository provides a thin wrapper around Ultralytics training for simple training situations. The basic usage is:

#+begin_src shell
synet ultralytics train [OTHER ULTRALYTICS ARGS]
#+end_src

For instance, to train a person detection model, you can train a VGA (640x480) model for the Sabre chip with:

#+begin_src shell
synet ultralytics train model=sabre-detect-vga.yaml data=coco.yaml
#+end_src

This will put all output at ./runs/train/exp. See "name", "project", and "exist_ok" in the Ultralytics docs for changing this. The above command also tries to download the COCO dataset to ../datasets.

For any further information, see the Ultralytics documentation for training: https://docs.ultralytics.com/modes/train
*** Validation

Validation is performed during training, but only on the validation set, and only with the floating-point (non-quantized) model. To use Ultralytics to run validation on your quantized (.tflite) model, you will need to specify the model, the task, the dataset split, and the canvas size. Additionally, if you want to use SyNet's advanced metrics tools, be sure to cache the results of model evaluation by passing 'save_txt' and 'save_conf' like so:

#+begin_src shell
synet ultralytics val model=runs/train/detect/weights/best.tflite split=val task=detect save_txt=True save_conf=True imgsz=640,480 data=coco.yaml
#+end_src

This should place the results of model evaluation in runs/val/detect, which you can point to when calling "synet metrics" (see above). For more information, see the Ultralytics documentation for validation: https://docs.ultralytics.com/modes/val
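With save_txt and save_conf enabled, Ultralytics writes one label file per image, with one detection per line: a class index, four normalized box coordinates, and (when save_conf=True) a trailing confidence. A small parser sketch, assuming that layout; parse_label_line is an illustrative helper, not part of SyNet:

```python
def parse_label_line(line):
    """Parse one save_txt detection line of the form
    'class x_center y_center width height [conf]' (coords in [0, 1])."""
    parts = line.split()
    return {
        "cls": int(parts[0]),
        "box": tuple(float(v) for v in parts[1:5]),  # x, y, w, h (normalized)
        "conf": float(parts[5]) if len(parts) > 5 else None,
    }

det = parse_label_line("0 0.5 0.5 0.25 0.4 0.93")
```

Keeping the confidence column is what lets downstream tools like "synet metrics" sweep thresholds without re-running the model.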
*** Predict (for demos)

You can use Ultralytics' predict to run the model on an input and optionally generate visualizations. For example, you can see the results of the model on your webcam stream with:

#+begin_src shell
synet ultralytics predict model=vga/detect/finetuned.tflite source=0 imgsz='[480,640]' show=True iou=.3 conf=.5
#+end_src

Breaking this apart: you are calling SyNet with the Ultralytics backend in predict mode, passing predict the path to your model (a tflite in this case), telling it to run from a webcam (undocumented in Ultralytics, but this is source=0), setting the image shape (Ultralytics cannot infer the image shape from a tflite), telling it to generate a graphical display, and specifying IoU and confidence thresholds. For more information, see the Ultralytics documentation: https://docs.ultralytics.com/modes/predict
* Contributing

** Test Suite

Please run the test suite before pushing ANY changes upstream. To do so, ensure that you have the development dependencies by installing synet with the [dev] set of optional dependencies:

#+begin_src shell
pip install -e ...synet[dev]
#+end_src

Then run the following in the synet root folder (the directory containing the "synet" folder):

#+begin_src shell
pytest -v
#+end_src

If you notice a bug despite the tests passing, please consider adding an appropriate test case in the 'tests' folder: https://docs.pytest.org/en/latest/getting-started.html
** Docstring Style

Docstrings conform to the numpy, scipy, and scikits docstring conventions: https://numpydoc.readthedocs.io/en/latest/format.html

** Imports

Only quantize.py and tflite_utils.py should import TensorFlow at the top of the file. Otherwise, TensorFlow modules should be imported at the beginning of the functions where they are used. This ensures TensorFlow is only loaded when strictly necessary.

Only backends/ultralytics.py should directly import anything from ultralytics, and backends.ultralytics should only be accessed by obtaining the ultralytics backend from backends.get_backend().
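The deferred-import convention above can be illustrated with a small sketch. Here json stands in for TensorFlow, since the point is the pattern, not the library: the heavy module is imported inside the function, so merely importing the enclosing file stays cheap.

```python
def encode_config(cfg):
    """Deferred import: the heavy dependency is loaded only when this
    function is first called, not when this module is imported."""
    import json  # stand-in for a heavy dependency like TensorFlow
    return json.dumps(cfg, sort_keys=True)

# Nothing heavy is loaded until the function actually runs.
result = encode_config({"model": "sabre-detect-vga", "imgsz": [480, 640]})
```

Python caches modules in sys.modules, so repeated calls pay the import cost only once; the saving is entirely in start-up time for code paths that never touch the heavy dependency.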
The updated README.md front matter after this commit:

---
title: SR100 Model Compiler Demo
emoji: 🏃
colorFrom: purple
colorTo: pink
hf_oauth_scopes:
- read-repos
- write-repos
- manage-repos
short_description: Compiles a tflite model onto the SR100
---